Install certificate on CentOS 7 for Docker registry access

We currently have a Docker registry set up with authentication enabled. Normally, in order to access it as a developer, I have to log in with docker login --username=someuser --password=somepassword --email user@domain.com https://docker-registry.domain.com.
However, since I am trying to automate the deployment of a Docker container in the cloud, one of the operations, the docker pull command, fails because no login was performed (it works if I add the login to the template, but that's bad practice).
It was suggested that I use a certificate (.crt file) to allow the pull to be performed. I tried installing the certificate using the steps explained here: https://www.linode.com/docs/security/ssl/ssl-apache2-centos
But it does not seem to work; I still have to log in manually in order to be able to perform my docker pull from the registry.
Is there a way I can replace the login command with the use of the certificate?

As I see it, that is the wrong guide for SSL/TLS between the Docker engine and a private registry server; it covers Apache, not the Docker registry. You can follow this instead, from the registry documentation:
Running a domain registry
While running on localhost has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
Get a certificate
Assuming that you own the domain myregistrydomain.com, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
Create a certs directory:
mkdir -p certs
Then move and/or rename your crt file to: certs/domain.crt, and your key file to: certs/domain.key.
Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
You should now be able to access your registry from another docker host:
docker pull ubuntu
docker tag ubuntu myregistrydomain.com:5000/ubuntu
docker push myregistrydomain.com:5000/ubuntu
docker pull myregistrydomain.com:5000/ubuntu
Gotcha
A certificate issuer may supply you with an intermediate certificate. In this case, you must combine your certificate with the intermediate's to form a certificate bundle. You can do this using the cat command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
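Note that this secures the registry itself. When the certificate is self-signed or comes from a private CA, each client host (the CentOS 7 machine from the question) also has to trust it; per the Docker documentation, you can do that by dropping the CA certificate into /etc/docker/certs.d under a directory named after the registry host, for example (hostnames reused from above):
mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
cp domain.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
This only makes the TLS connection trusted, though; if the registry also requires credentials, a docker login (or a pre-populated ~/.docker/config.json) is still needed on the host doing the pull.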

Related

How to reset CapRover password?

I just installed CapRover on my server and I forgot my password 🤦‍♂️
but I still have access via SSH normally.
How could I reset it?
You can run these commands, as mentioned in the documentation:
docker service scale captain-captain=0
Backup config
cp /captain/data/config-captain.json /captain/data/config-captain.json.backup
Delete old password
jq 'del(.hashedPassword)' /captain/data/config-captain.json > /captain/data/config-captain.json.new
cat /captain/data/config-captain.json.new > /captain/data/config-captain.json
rm /captain/data/config-captain.json.new
Set a temporary password
docker service update --env-add DEFAULT_PASSWORD=captain42 captain-captain
docker service scale captain-captain=1
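Once the service is back up, you should be able to log in with the temporary password captain42 and set a new password from the dashboard; after that it is probably worth removing the temporary environment variable again, for example:
docker service update --env-rm DEFAULT_PASSWORD captain-captain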

docker-compose pull gives either a gpg error or a permissions error when I attempt to use it with or without sudo

Hi everyone, I hope that someone can help answer my question.
I am joining a project in which I have to use various Docker containers. I was told that I just needed to use docker-compose to pull down all the necessary containers. I tried this and got two different errors, depending on whether I used sudo or not. My machine runs Ubuntu 18.04.4 LTS (Bionic Beaver).
I have docker-engine installed according to the installation instructions for Bionic on the github page, and docker-compose is likewise installed according to its instructions. I did not create a "docker" group since I have sudo access.
We have two repos that I have to log in to before I can do anything. In order to prevent my passwords from being stored unencrypted in config.json, I followed this guide to set up a secure credential store:
https://www.techrepublic.com/article/how-to-setup-secure-credential-storage-for-docker/
However, rather than asking me for the password and/or passphrase mentioned in this article, the login process makes me enter the actual passwords to the repos. So, the secure credential store may not be working, which might be causing the problem.
At any rate, once I log in and the two commands show login succeeded, I then try to do a
docker-compose pull
on the repos. When I do
sudo docker-compose pull
I get this final error:
docker.errors.DockerException: Credentials store error: StoreError('Credentials store docker-credential-pass exited with "exit status 2: gpg: WARNING: unsafe ownership on homedir '/home/myuser/.gnupg'\ngpg: decryption failed: No secret key".')
An ls of the .gnupg directory shows:
myuser@myhost$ ls -lA ~ | grep gnupg
drwx------ 4 myuser myuser 226 Feb 9 13:35 .gnupg
gpg --list-secret-keys shows my keypair when I run it as myuser.
I am assuming that what is happening is that, because I am running under sudo, the user trying to access this directory is root, not myuser, and so it fails. However, if I leave off the sudo and just run
docker-compose pull
I get:
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
I am guessing that this is because my normal user doesn't have the ability to connect to the docker daemon's Unix socket.
So, how do I make these play together? Is the answer to add a docker group so that the command still runs as myuser and not as root? Or is there another way to do this?
Also, why is my credential store not asking me for the password set by docker-credential-pass or the GPG passphrase? I suspect these two are related. Perhaps the pull is trying to send my authentication tokens over again and can't because it doesn't have access to the secure credentials store.
All of the above are guesses. Does anyone know what is going on here?
Thanking you in advance,
Brad
I just wanted to follow up with a solution to this question that worked for me.
Firstly, you need to add your user to the docker group that was created during docker-engine's installation.
sudo usermod --append --groups docker your_user_name
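Note that the new group membership only takes effect in a new login session, so you may need to log out and back in, or (as a shortcut) start a subshell with the group already applied:
newgrp docker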
Because I had already used sudo to try this, there were a few files that ended up being created by root.
So, you have to chown a few things.
sudo chown your_user_name:your_group_name ~/.docker/config.json
Note that for the group name I used docker, but I'm not sure if that's necessary.
Then, there were files inside the ~/.password-store directory that needed to be changed.
sudo chown -R your_user_name:your_group_name ~/.password-store
Most of these files are already owned by you, but the recorded credentials are not.
Then, the magic that fixed it all. From
https://ask.csdn.net/questions/5153956
you have to do this:
export GPG_TTY=$(tty)
It is this last step that makes gpg work.
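Since that export only lasts for the current shell session, you may want to persist it, for example in ~/.bashrc (assuming bash; adjust for your shell):
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc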
Then, you can log in to your repos, if you have to, without using sudo:
docker login -u repo_user_name your_repo_host
and then log in with your repo password.
Note that I don't know why you have to use the repo password instead of using the stored credentials.
Once you log in, you should be able to do a
docker-compose pull
without sudo
from the directory where you want the containers to be placed.
Note that you will probably have to provide your GPG passphrase at first. I'm not sure about this because I had already unlocked the key by following the steps in the above link to check to see if docker-credential-pass had the right credential store password stored.
And that should do it.

GitLab Runner won't connect to other ec2, via scp ERROR: debug1: read_passphrase: can't open /dev/tty: No such device or address

Currently, I'm trying to use a Docker image in my GitLab CI file in order to connect to my production server and overwrite my code in production on deployment. While I can SSH from my local machine with the private key given, whenever I try to copy the private key into a CI variable and connect, I consistently get this error:
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
...
debug1: read_passphrase: can't open /dev/tty: No such device or address
Host key verification failed.
lost connection
I've verified that the /dev/tty device exists on both machines and that my .pem can be read correctly on the GitLab runner after copying. I've set appropriate permissions with chmod and have also tried multiple permutations of calling the scp command. I'm currently running the connection within the before_script of my gitlab.yml file, to avoid the delay of building the Docker images within my file; the relevant portion is included below.
EDIT: /dev/tty also has the correct permissions. I've viewed the previous Stack Overflow posts related to this issue and they either weren't relevant to the problem or didn't solve it.
image: docker:19.03.5
services:
- docker:19.03.1-dind
before_script:
- docker info
- apk update
- apk add --no-cache openssh
- touch $SSH_KEY_NAME
- echo "$SSH_KEY" > "./$SSH_KEY_NAME"
- chmod 700 $SSH_KEY_NAME
- ls -la /dev/tty
- scp -v -P 22 $SSH_KEY_NAME -i $SSH_KEY_NAME $PROD_USER@$SERVER_URL:.
Apologies if this feels dumb, but I have little experience with setting up a private key for another machine. Currently I'm unsure whether I need to link the private key within my GitLab Runner in a specific way, or whether the echo isn't saving the .pem as a valid private key. My inbound rules for the AWS instance allow all traffic on port 22, and copying this key and connecting from my PC works fine. It's just the runner that has problems. Thanks for your help!
The best solution I found is to either run an Ubuntu-based GitLab image and manually call Docker inside it with a volume tied to SSH, or use an AWS instance with a password stored in GitLab secrets if you're hard-pressed to keep the dind image in GitLab. Neither is truly optimal, but because of how effectively containerization isolates things, you have to resolve it with one of these two methods.
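For reference, a rough sketch of what the first option could look like; it assumes an Ubuntu-based job image, reuses the $SSH_KEY, $PROD_USER and $SERVER_URL variables from the question, uses some_file as a placeholder for whatever you actually want to copy, and adds the host to known_hosts with ssh-keyscan (missing host keys are the usual cause of "Host key verification failed"). It's an illustration under those assumptions, not a drop-in fix:
image: ubuntu:20.04
before_script:
  - apt-get update && apt-get install -y openssh-client
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  - echo "$SSH_KEY" > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan -p 22 "$SERVER_URL" >> ~/.ssh/known_hosts
deploy:
  script:
    - scp -i ~/.ssh/id_rsa some_file "$PROD_USER@$SERVER_URL:."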

How to mount a secret in OpenShift with uid:gid set correctly

I'm using this Dockerfile to deploy PostgreSQL on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It works fine until I enable ssl=on and inject the server.crt and server.key files into the postgres pod via a volume mount.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the given deployment config (dc) of postgres.
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by the root user, but postgres expects them to be owned by the postgres user. Because of that, the postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt": Permission denied
stopped waiting
pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting the volume security context so all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project, this is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems you may have to use Security Contexts and PodSecurityPolicies to make it work.
I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case), as sketched below.
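As an illustration of that last idea, a minimal wrapper entrypoint might look something like this sketch. Paths reuse the mount point from the question, run.sh stands in for whatever the image's real entrypoint/command is, and it assumes the container starts with enough privileges to perform the chown (which restrictive OpenShift SCCs may prevent):
#!/bin/sh
set -e
# Copy the certs out of the secret mount (which may be read-only) into a writable location
mkdir -p /var/lib/pgdata/data/certs
cp /var/lib/pgdata/data/secrets/secrets/server.crt /var/lib/pgdata/data/certs/
cp /var/lib/pgdata/data/secrets/secrets/server.key /var/lib/pgdata/data/certs/
# Give them to the postgres user with the permissions PostgreSQL expects for the key
chown postgres:postgres /var/lib/pgdata/data/certs/server.crt /var/lib/pgdata/data/certs/server.key
chmod 0600 /var/lib/pgdata/data/certs/server.key
# Hand over to the image's original entrypoint (run.sh is a placeholder)
exec run.sh "$@"
PostgreSQL's ssl_cert_file and ssl_key_file settings would then have to point at the copied locations instead of the original mount path.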

Add SSH key for the nginx user (for GitHub)

I'm running into a bit of an issue with nginx and SSH keys.
I need to add an SSH key for the nginx user to access private GitHub repositories, and then run "git ..." commands to pull or clone the repo onto my Ubuntu box.
With the nginx user just being a worker account, is it possible to generate a key for this user?
Thanks for any help!
You can run commands as another user without having to provide their password using sudo:
$ sudo -u nginx ssh-keygen -t rsa -C "email@address.com"
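Once the key exists, you could add the generated public key as a deploy key on the GitHub repository and then run git the same way; the repository path below is only a placeholder:
$ sudo -u nginx git clone git@github.com:your-org/your-private-repo.git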