How to use a secure Docker registry mirror with docker-machine? - certificate

I have a Docker Distribution v2 registry that I'm using as a mirror. It works with Docker for Mac Community Edition 17.03.1-ce-mac12 (17661), but I get a "certificate signed by unknown authority" error when accessing it from a docker-machine node. The setup is as follows:
openssl req -newkey rsa:4096 -nodes -sha256 -keyout "/certs/domain.key" -x509 -days "365" -out "/certs/domain.cert" -subj /CN="192.168.17.11"
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $DIR/devenv/domain.cert
docker run -d --restart=always -p 6000:5000 --name registry-mirror -v /Volumes/Data/registry_cache:/var/lib/registry registry:2 /var/lib/registry/config.yml
docker pull busybox
curl -k https://192.168.17.11:6000/v2/_catalog
{"repositories":["library/busybox"]}
docker-machine create -d virtualbox --engine-registry-mirror https://192.168.17.11:6000 mynode
docker-machine ssh mynode "sudo mkdir -p /etc/docker/certs.d/192.168.17.11:6000 && sudo chmod -R 777 /etc/docker/certs.d"
docker-machine scp $DIR/devenv/domain.cert mynode:/etc/docker/certs.d/192.168.17.11:6000/domain.cert
docker-machine scp $DIR/devenv/domain.key mynode:/etc/docker/certs.d/192.168.17.11:6000/domain.key
docker-machine restart mynode
eval $(docker-machine env mynode)
docker info
… Registry Mirrors:
https://192.168.17.11:6000/
docker pull busybox
cat /var/log/docker.log
… time="2017-05-30T12:33:01.593516721Z" level=debug msg="Trying to pull busybox from https://192.168.17.11:6000/ v2"
time="2017-05-30T12:33:02.539391694Z" level=warning msg="Error getting v2 registry: Get https://192.168.17.11:6000/v2/: x509: certificate signed by unknown authority"
I'm not sure how to get the boot2docker VM to accept the certificate used by the Docker Distribution v2 registry mirror. Other examples copy a ca.crt for the certificate authority into /etc/docker/certs.d/, but this certificate is self-signed.
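One thing worth trying (an assumption based on how Docker handles self-signed registry certificates, not verified against this exact setup): Docker looks for a file named exactly ca.crt under /etc/docker/certs.d/<host>:<port>/, and a self-signed certificate is its own CA, so the cert itself can be installed under that name. The private key does not belong on the client at all.

```shell
# Assumption: for a self-signed cert, the cert itself acts as the CA, and the
# Docker engine only trusts files named ca.crt in the per-registry certs.d dir.
docker-machine ssh mynode "sudo mkdir -p /etc/docker/certs.d/192.168.17.11:6000"
docker-machine scp $DIR/devenv/domain.cert mynode:/etc/docker/certs.d/192.168.17.11:6000/ca.crt
# Restart so the engine re-reads certs.d (the domain.key copy is unnecessary):
docker-machine restart mynode
```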

A reboot of the macOS box hosting this setup appears to have resolved the problem, and for now, pending time to pin it down, this is good enough.

Related

Connecting VSCode and GCP

I have a google cloud compute instance that I connect to with
INSTANCE_NAME='sam_vm'
gcloud beta compute ssh --zone "us-central1-a" $INSTANCE_NAME --project sam_project
When I try to connect by pressing Cmd+P and typing in that command, I get:
ssh: Could not resolve hostname gcloud beta compute ssh --zone "us-central1-a" "shleifer-v1-vm" --project $hf_proj: nodename nor servname provided, or not known
How can I connect?
Run
gcloud compute config-ssh
then, in the VS Code Remote-SSH popup:
ssh {instance_name}.{zone}.{project_name}
Even though it has been 2 years since this question was asked, here are instructions for anyone still trying to figure out how to set up SSH to GCP in VS Code on macOS.
Once you have the Google Cloud CLI installed and have been granted access to the VM instance, log in to your Google Cloud account by running in a terminal:
gcloud auth login
Set up the following variables, which will be used in the gcloud commands:
gcloud config set project <your project name>
gcloud config set compute/zone <your GCP zone>
Then you can establish an SSH tunnel between the VM and your local machine:
gcloud compute ssh [YOUR-VM-NAME] --tunnel-through-iap
Set up VS Code and install the Remote Development extension.
Run the following command in a terminal to obtain the underlying SSH command that gcloud compute ssh uses to connect to the VM:
gcloud compute ssh [YOUR-VM-NAME] --tunnel-through-iap --dry-run
The output may look similar to the following:
/usr/bin/ssh -t -i /Users/<User_ID>/.ssh/google_compute_engine -o CheckHostIP=no
-o HashKnownHosts=no -o HostKeyAlias=compute.8972912327831725343 -o IdentitiesOnly=yes
-o StrictHostKeyChecking=yes
-o UserKnownHostsFile=/Users/<User_ID>/.ssh/google_compute_known_hosts
-o "ProxyCommand /Library/Frameworks/Python.framework/Versions/3.10/bin/python3
/Library/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel <instance name>
%p --listen-on-stdin --project=<your project name> --zone=<your zone> --verbosity=warning"
-o ProxyUseFdpass=no <Gcloud_Username>@compute.8972912327831725343
Copy this SSH command and remove '/usr/bin/':
ssh -t -i /Users/<User_ID>/.ssh/google_compute_engine -o CheckHostIP=no
-o HashKnownHosts=no -o HostKeyAlias=compute.8972912327831725343 -o IdentitiesOnly=yes
-o StrictHostKeyChecking=yes
-o UserKnownHostsFile=/Users/<User_ID>/.ssh/google_compute_known_hosts
-o "ProxyCommand /Library/Frameworks/Python.framework/Versions/3.10/bin/python3
/Library/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel <instance name>
%p --listen-on-stdin --project=<your project name> --zone=<your zone> --verbosity=warning"
-o ProxyUseFdpass=no <Gcloud_Username>@compute.8972912327831725343
In VS Code:
Press 'Cmd' + 'Shift' + 'P'
Select 'Remote-SSH: Add New Host'
Press 'Enter'
Select the config file /Users/<User_ID>/.ssh/config
Press 'Cmd' + 'Shift' + 'P'
Select 'Remote-SSH: Connect to Host...'
Select the compute instance, choosing Linux as the platform.
VS Code should establish an SSH tunnel to the VM, and you should be able to open the root folder.
Hope this saves someone hours of effort.
First, execute gcloud compute config-ssh.
Then open the ~/.ssh/config file and find:
Host {instance_name}.{zone}.{project_name}
HostName {ip_name_google_provided}
IdentityFile /Users/youruser/.ssh/google_compute_engine
UserKnownHostsFile=/Users/youruser/.ssh/google_compute_known_hosts
HostKeyAlias=compute.123423
IdentitiesOnly=yes
CheckHostIP=no
and convert it to:
Host {ip_name_google_provided}
HostName {ip_name_google_provided}
IdentityFile /Users/youruser/.ssh/google_compute_engine
UserKnownHostsFile=/Users/youruser/.ssh/google_compute_known_hosts
HostKeyAlias=compute.123423
IdentitiesOnly=yes
CheckHostIP=no
As a solution, simply use the IP ({ip_name_google_provided}) from HostName as the Host name and connect to {ip_name_google_provided}.
Note: the problem is that the computer cannot resolve the DNS name {instance_name}.{zone}.{project_name} from the SSH config file.
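The conversion described above can also be checked mechanically. Here is a self-contained sketch on a throwaway file (the instance, zone, project, and IP are placeholders, not values from the question):

```shell
# Build a sample of the entry gcloud compute config-ssh generates:
cat > /tmp/gcp-ssh-demo <<'EOF'
Host my-vm.us-central1-a.my-project
    HostName 203.0.113.7
    IdentityFile ~/.ssh/google_compute_engine
EOF
# Replace the unresolvable alias with the raw IP so plain ssh can connect:
sed -i 's/^Host my-vm\.us-central1-a\.my-project$/Host 203.0.113.7/' /tmp/gcp-ssh-demo
grep '^Host ' /tmp/gcp-ssh-demo
```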

Kompose up username and password authentication

I'm experimenting with kompose on k3s to turn a Compose file into Kubernetes manifests, but when I run kompose up it asks me to enter a username and password, and I don't know what to enter.
The specific output is as follows:
# kompose up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
Please enter Username: test
Please enter Password: test
FATA Error while deploying application: Get https://127.0.0.1:6443/api: x509: certificate signed by unknown authority
However, the kompose convert command executes successfully.
I would appreciate it if you could tell me how to solve this.
The kompose version is 1.21.0 (992df58d8), installed via curl and chmod.
The k3s version is v1.17.3+k3s1 (5b17a175), installed via the install.sh script.
The OS is Ubuntu 18.04.3 LTS.
I seem to have found my problem: when k3s is installed via the default install.sh script, it stores the Kubernetes configuration file in /etc/rancher/k3s/k3s.yaml instead of the usual ~/.kube/config.
This caused kompose up to fail to obtain the certs.
You can copy /etc/rancher/k3s/k3s.yaml to ~/.kube/config:
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Then kompose up executes successfully.
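An alternative to copying the file, assuming the tool follows the standard kubeconfig loading rules (kompose, like kubectl, should honor the KUBECONFIG environment variable), is to point KUBECONFIG at the k3s-generated file:

```shell
# Point kubeconfig-aware tools at the k3s config instead of copying it:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
```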
The answer that the OP gave is for a different issue than the "certificate signed by unknown authority" error they posted. The certificate issue is almost certainly caused by a self-signed cert. For that, you have to get your workstation's OS to accept the cert. On Linux, I use:
openssl s_client -showcerts -connect 127.0.0.1:6443 2>/dev/null </dev/null | \
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | \
sudo tee /usr/local/share/ca-certificates/k8s.crt
sudo update-ca-certificates
sudo systemctl restart docker
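The sed filter in the pipeline above can be checked in isolation: it keeps only the PEM certificate block(s) and drops the surrounding s_client chatter. A self-contained demo on canned input (the payload line is a placeholder, not a real certificate):

```shell
# Demonstrate the PEM-extracting sed filter without needing a cluster:
cat > /tmp/s_client.out <<'EOF'
CONNECTED(00000003)
depth=0 CN = k3s
-----BEGIN CERTIFICATE-----
MIIBdemoBASE64payload==
-----END CERTIFICATE-----
DONE
EOF
# Prints only the three lines between (and including) the PEM markers:
sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' /tmp/s_client.out
```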

put custom ssh public key into authorized_keys on minikube cluster

How do I put a custom SSH public key into authorized_keys on a minikube cluster? Why are changes to /home/docker/.ssh/authorized_keys lost after a reboot? How can I edit this file effectively?
This works (minikube v1.2.0):
minikube ssh
cd /var/lib/boot2docker
sudo cp userdata.tar userdata-backup.tar
cd /home/docker
echo YOUR_SSH_PUBLIC_KEY_HERE >> .ssh/authorized_keys
sudo tar cf /var/lib/boot2docker/userdata.tar .
because minikube extracts files from userdata.tar at boot; the source code is here.
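The persistence mechanism can be simulated locally: anything packed into the tar reappears after extraction, which is what minikube does at boot. A sketch with illustrative paths (not the real VM paths):

```shell
# Simulate the boot2docker persistence cycle in /tmp:
mkdir -p /tmp/home-docker/.ssh /tmp/persist
echo "ssh-rsa AAAA... demo-key" >> /tmp/home-docker/.ssh/authorized_keys
# What you do before the reboot: snapshot the home dir into the tar.
tar cf /tmp/persist/userdata.tar -C /tmp/home-docker .
# Simulate the reboot wiping the home dir:
rm -rf /tmp/home-docker && mkdir -p /tmp/home-docker
# What minikube does at boot: re-extract the tar.
tar xf /tmp/persist/userdata.tar -C /tmp/home-docker
cat /tmp/home-docker/.ssh/authorized_keys   # the key survived the "reboot"
```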

Invalid x509 certificate for kubernetes master

I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine, but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
How can I add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster, and then re-sign the client certificate? I deployed my cluster with kubeadm and I am not sure how to do these steps manually.
One option is to tell kubectl that you don't want the certificate to be validated. Obviously this raises security issues, but I guess you are only testing, so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
The better option is to fix the certificate. The easiest way is to reinitialize the cluster by running kubeadm reset on all nodes, including the master, and then do
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix the certificate without wiping everything, but that's a bit trickier. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
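After regenerating, it is worth confirming which SANs the certificate actually carries; on the master that would mean pointing openssl at /etc/kubernetes/pki/apiserver.crt. The check is sketched here on a throwaway self-signed cert (generated with the same SAN list; requires OpenSSL >= 1.1.1 for -addext):

```shell
# Generate a throwaway cert carrying extra IP SANs, mimicking what
# --apiserver-cert-extra-sans adds to the API server certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:10.96.0.1,IP:10.161.233.80,IP:114.215.201.87"

# List the SANs; the workstation IP must appear here or TLS validation fails:
openssl x509 -in /tmp/demo.crt -noout -text | grep -A1 'Subject Alternative Name'
```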
For newer Kubernetes (>= 1.8), use this instead:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add a DNS name to --apiserver-cert-extra-sans to avoid issues like this next time.
For kubeadm v1.13.3
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
If you used kubespray to provision your cluster, then you need to add a 'floating ip' (in your case '114.215.201.87'). This variable is called supplementary_addresses_in_ssl_keys and lives in the group_vars/k8s-cluster/k8s-cluster.yml file. After updating it, just re-run ansible-playbook -b -v -i inventory/<WHATEVER-YOU-NAMED-IT>/hosts.ini cluster.yml.
NOTE: you still have to remove all the apiserver certs (rm /etc/kubernetes/pki/apiserver.*) from each of your master nodes before running it!
Issue cause:
The configs in $HOME/.kube/ still contain your old IP address.
Try running:
rm $HOME/.kube/* -rf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
For Kubernetes 1.12.2/CentOS 7.4 the sequence is as follows:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=51.158.75.136
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
Use the following command:
kubeadm init phase certs all
For me, the error occurred when I was trying to access the cluster as root (after sudo -i).
I exited, and as a normal user it worked.
For me the following helped:
rm -rf ~/.minikube
minikube delete
minikube start
Steps 2 and 3 alone would probably have been sufficient.

How do I connect bare metal worker to docker TSA host

I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects a worker to the TSA host.
It's worth mentioning that the Concourse web and db containers are running on the same machine that I hope to use as a bare-metal worker.
I have tried (1) using the IP address of the Concourse web container, but no joy; I cannot even ping the container IP from the host.
1.
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
I have also tried using (2) the CONCOURSE_EXTERNAL_URL and (3) the IP address of the host, but no luck either.
2.
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
3.
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac
Please confirm that you are using the internal IP of the host, not the public IP and not the container IP.
--tsa-host <INTERNAL_IP_OF_HOST>
If you use the docker-compose.yml from the setup document, you needn't worry about the TSA host; the environment variable is already defined:
CONCOURSE_TSA_HOST: concourse-web
I recently used the docker-compose.yml with the steps described at https://concourse-ci.org/docker-repository.html .
Please confirm that there is a keys directory next to the docker-compose.yml after you have executed the steps:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up