I want to use Minikube for local development. It needs to access my company's internal docker registry, which is signed with a third-party certificate.
Locally, I would copy the cert and run update-ca-trust extract or update-ca-certificates depending on the OS.
For the Minikube vm, how do I get the cert installed, registered, and the docker daemon restarted so that docker pull will trust the server?
I had to do something similar recently. You should be able to just hop on the machine with minikube ssh and then follow the directions here
https://docs.docker.com/engine/security/certificates/#understanding-the-configuration
to place the CA in the appropriate directory (/etc/docker/certs.d/[registry hostname]/). You shouldn't need to restart the daemon for it to work.
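For example, a minimal sketch (the registry hostname is a placeholder; recent minikube versions provide minikube cp for copying files into the VM):
minikube cp ca.crt /tmp/registry-ca.crt
minikube ssh "sudo mkdir -p /etc/docker/certs.d/registry.example.com && sudo cp /tmp/registry-ca.crt /etc/docker/certs.d/registry.example.com/ca.crt"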
Well, minikube has a feature that copies all the contents of the ~/.minikube/files directory to its VM filesystem. So you can place your certificates under the
~/.minikube/files/etc/docker/certs.d/<docker registry host>:<docker registry port> path
and these files will be copied into the proper destination on minikube startup automagically.
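For example, a minimal sketch (registry host and port are placeholders):
mkdir -p ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000
cp ca.crt ~/.minikube/files/etc/docker/certs.d/registry.example.com:5000/ca.crt
minikube start   # the files are synced into the VM on startup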
Shell into Minikube.
Copy your certificates to:
/etc/docker/certs.d/<docker registry host>:<docker registry port>
Ensure that the permissions on the certificate are correct; it must be at least readable.
Restart Docker (systemctl restart docker)
Don't forget to create a secret if your Docker Registry uses basic authentication:
kubectl create secret docker-registry service-registry --docker-server=<docker registry host>:<docker registry port> --docker-username=<name> --docker-password=<pwd> --docker-email=<email>
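A pod can then reference that secret; a minimal sketch (the image path is a placeholder):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: registry-test
spec:
  containers:
  - name: registry-test
    image: <docker registry host>:<docker registry port>/some-image:latest
  imagePullSecrets:
  - name: service-registry
EOF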
Have you checked ImagePullSecrets?
You can create a secret with your cert and let your pod use it.
By starting up minikube with the following:
minikube start --insecure-registry=internal-site.dev:5244
It will start the docker daemon with the --insecure-registry option:
/usr/local/bin/docker daemon -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:2376 --label provider=virtualbox --insecure-registry internal-site.dev:5244 --tlsverify --tlscacert=/var/lib/boot2docker/ca.pem --tlscert=/var/lib/boot2docker/server.pem --tlskey=/var/lib/boot2docker/server-key.pem -s aufs
but this expects the connection to be HTTP. Despite what the Docker registry documentation suggests, Basic auth does work, but the credentials need to be placed in an imagePullSecret as described in the Kubernetes docs.
I would also recommend reading "Adding imagePullSecrets to a service account" (link on the page above) to get the secret added to all pods as they are deployed. Note that this will not impact already deployed pods.
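A minimal sketch of that, assuming the secret is named regcred as in the Kubernetes docs:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'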
One option that works for me is to run a k8s job to copy the cert to the minikube host...
This is what I used to trust the Harbor registry I deployed into my minikube:
cat > update-docker-registry-trust.yaml << END
apiVersion: batch/v1
kind: Job
metadata:
  name: update-docker-registry-trust
  namespace: harbor
spec:
  template:
    spec:
      containers:
      - name: update
        image: centos:7
        command: ["/bin/sh", "-c"]
        args: ["find /etc/harbor-certs; find /minikube; mkdir -p /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io; cp /etc/harbor-certs/ca.crt /minikube/etc/docker/certs.d/core.harbor-${MINIKUBE_IP//./-}.nip.io/ca.crt; find /minikube"]
        volumeMounts:
        - name: harbor-harbor-ingress
          mountPath: "/etc/harbor-certs"
          readOnly: true
        - name: docker-certsd-volume
          mountPath: "/minikube/etc/docker/"
          readOnly: false
      restartPolicy: Never
      volumes:
      - name: harbor-harbor-ingress
        secret:
          secretName: harbor-harbor-ingress
      - name: docker-certsd-volume
        hostPath:
          # directory location on host
          path: /etc/docker/
          # this field is optional
          type: Directory
  backoffLimit: 4
END
kubectl apply -f update-docker-registry-trust.yaml
You should copy your root certificate to $HOME/.minikube/certs and restart minikube with the --embed-certs flag.
For more details, please refer to the minikube handbook: https://minikube.sigs.k8s.io/docs/handbook/untrusted_certs/
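For example, a minimal sketch (the certificate filename is a placeholder):
mkdir -p $HOME/.minikube/certs
cp company-root-ca.pem $HOME/.minikube/certs/
minikube stop
minikube start --embed-certs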
As best I can tell, there is no way to do this. The next best option is to use the insecure-registry option at startup.
minikube start --insecure-registry=foo.com:5000
Related
I am trying to deploy an image from my private registry, which is hosted on my local network and resolved using my local machine's /etc/hosts file.
I am getting the resolution error as below:
Failed to pull image "gitlab.example.com:5050/group/project:latest": rpc error: code = Unknown desc = failed to resolve image
My /etc/hosts file contains:
192.168.1.100 gitlab.example.com
Using docker, the pull/push works perfectly fine, as the resolution happens via /etc/hosts.
I've tried editing the Corefile of CoreDNS to make the resolution happen, but it isn't working.
Can someone point me in the right direction here?
You can try to use hostAliases 💡 and add a host entry to your pod. Kubernetes will not honor the /etc/hosts file from the hosts/nodes; a pod uses either its own /etc/hosts or resolves through CoreDNS. For example:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "192.168.1.100"        # 👈
    hostnames:
    - "gitlab.example.com"     # 👈
  containers:
  - name: gitlab-hosts
    image: myimage
    command:
    - mygitlabjob
    args:
    - "arg1"
Update:
I think I see the problem here. microk8s on Ubuntu runs in a snap, which means it's confined/sandboxed in a container of its own. This also means that it probably doesn't care about your machine's /etc/hosts file. Unfortunately, snap file systems are mounted read-only for security reasons and to prevent tampering.
○ → pwd
/snap/microk8s/current
○ → sudo touch hosts
touch: cannot touch 'hosts': Read-only file system
If you'd like to use a private registry this way, some recommendations:
Ask your system admin to add that entry into your local DNS server, or add it if you are the system admin.
Use an alternative small K8s distro that uses Docker.
KinD
minikube
Build your own microk8s snap with a modified /etc/hosts file (hard)
I am trying to configure SSL for the Kubernetes Dashboard. Unfortunately I receive the following error:
2020/07/16 11:25:44 Creating in-cluster Sidecar client
2020/07/16 11:25:44 Error while loading dashboard server certificates. Reason: open /certs/tls.crt: no such file or directory
volumeMounts:
- name: certificates
  mountPath: /certs
  # Create on-disk volume to store exec logs
I think that /certs should be mounted, but where should it be mounted?
Certificates are stored as secrets. The secret can then be mounted in a deployment.
So in your example it would look something like this:
...
        volumeMounts:
        - name: certificates
          mountPath: /certs
          # Create on-disk volume to store exec logs
...
      volumes:
      - name: certificates
        secret:
          secretName: certificates
...
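For completeness, the certificates secret referenced above could be created from existing cert/key files with something like this (the namespace is an assumption based on the standard Dashboard setup):
kubectl create secret generic certificates \
  --from-file=tls.crt --from-file=tls.key \
  -n kubernetes-dashboard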
This is just a short snippet of the whole process of setting up Kubernetes Dashboard v2.0.0 with recommended.yaml.
If you used the recommended.yaml, then the certs are created automatically and stored in memory. The deployment is created with args: --auto-generate-certificates.
I also recommend reading How to expose your Kubernetes Dashboard with cert-manager as it might be helpful to you.
There was already an issue submitted with a similar problem to yours, Couldn't read CA certificate: open : no such file or directory #2518, but it concerns Kubernetes v1.7.5.
If you have any more issues, let me know; I'll update the answer if you provide more details.
I've deployed a docker registry inside my kubernetes:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
registry-docker-registry ClusterIP 10.43.39.81 <none> 443/TCP 162m
I'm able to pull images from my machine (service is exposed via an ingress rule):
$ docker pull registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
...
Status: Downloaded newer image for registry-do...
When I try to deploy my image into the same kubernetes to test it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: covid-backend
  namespace: skaffold
spec:
  replicas: 3
  selector:
    matchLabels:
      app: covid-backend
  template:
    metadata:
      labels:
        app: covid-backend
    spec:
      containers:
      - image: registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
        name: covid-backend
        ports:
        - containerPort: 8080
Then, I've tried to deploy it:
$ cat pod.yaml | kubectl apply -f -
However, kubernetes isn't able to reach the registry:
Extract of kubectl get events:
6s Normal Pulling pod/covid-backend-774bd78db5-89vt9 Pulling image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd"
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Failed to pull image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": rpc error: code = Unknown desc = failed to pull and unpack image "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to resolve reference "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to do request: Head https://registry-docker-registry.registry/v2/skaffold-covid-backend/manifests/sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd: dial tcp: lookup registry-docker-registry.registry: Try again
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Error: ErrImagePull
As you can see, kubernetes is not able to access the internally deployed registry...
Any ideas?
I would recommend following the docs from k3d; they are here.
More precisely this one
Using your own local registry
If you don't want k3d to manage your registry, you can start it with some docker commands, like:
docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
These commands will start your registry at registry.local:5000. In order to push to this registry, you will need to add the line to /etc/hosts as described in the previous section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you must connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.local. And then you can check your local registry.
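Putting those steps together, a sketch of the manual flow (the cluster network name assumes the k3d default):
docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
echo "127.0.0.1 registry.local" | sudo tee -a /etc/hosts
docker network connect k3d-k3s-default registry.local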
Pushing to your local registry address
The registry will be located, by default, at registry.local:5000 (customizable with the --registry-name and --registry-port parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname must also be resolvable from your host.
The easiest solution for this is to add an entry in your /etc/hosts file like this:
127.0.0.1 registry.local
Once again, this will only work with k3s >= v0.10.0 (see the section below when using k3s <= v0.9.1)
Local registry volume
The local k3d registry uses a volume for storing the images. This volume will be destroyed when the k3d registry is released. In order to persist this volume and make these images survive the removal of the registry, you can specify a volume with --registry-volume and use the --keep-registry-volume flag when deleting the cluster. This will create a volume with the given name the first time the registry is used, while successive invocations will just mount this existing volume in the k3d registry container.
Docker Hub cache
The local k3d registry can also be used for caching images from the Docker Hub. You can start the registry as a pull-through cache when the cluster is created with --enable-registry-cache. Used in conjunction with --registry-volume/--keep-registry-volume, it can speed up all the downloads from the Hub by keeping a persistent cache of images on your local machine.
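Based on the flags described in the last two sections, a sketch of creating a cluster with a persistent pull-through cache (the --enable-registry flag is an assumption from the same k3d v1 docs):
k3d create --enable-registry --registry-volume local_registry --enable-registry-cache
# later: delete the cluster but keep the cached images
k3d delete --keep-registry-volume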
Testing your registry
You should test that you can:
push to your registry from your local development machine.
use images from that registry in Deployments in your k3d cluster.
We will verify these two things for a local registry (located at registry.local:5000) running on your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary on your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
Firstly, we can download some image (like nginx) and push it to our local registry with:
docker pull nginx:latest
docker tag nginx:latest registry.local:5000/nginx:latest
docker push registry.local:5000/nginx:latest
Then we can deploy a pod referencing this image to your cluster:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: registry.local:5000/nginx:latest
        ports:
        - containerPort: 80
EOF
Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".
Additionally, there are two GitHub links worth visiting:
K3d not able resolve dns
You could try to use the answer provided by @rjshrjndrn; it might solve your issue with DNS.
docker images are not pulled from docker repository behind corporate proxy
An open GitHub issue on k3d with the same problem as yours.
I am having some real trouble getting my pods to pull images from a private docker registry that I have set up and am able to authenticate to. I can run docker login https://my.website.com/ and get Login Succeeded without having to put in my username:password, and I can run docker pull my.website.com:5000/human/forum and see all the layers being downloaded.
I use https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy where I specify the namespace to be "default".
I made sure to put "https://my.website.com:5000/v2/" (in lowercase) in the auth section of the docker config file before I generated the regcred secret.
Notice that I specify the imagePullSecrets below:
# deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: angular-bazel-example-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: angular-bazel-example-prod
    spec:
      containers:
      - name: angular-bazel-example
        image: human/forum:dev
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred # Notice
I made sure to update my certificate authority certificates:
cp /etc/docker/certs.d/my.website.com\:5000/ca.crt /usr/local/share/ca-certificates/my.website.registry.com/
sudo update-ca-certificates
Running sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/my.website.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/_catalog returns:
> {"repositories":["human/forum"]}
Running sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/mywebsite.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/human/forum/tags/list returns:
> {"name":"a/repository","tags":["dev"]}
There must be a way to troubleshoot this but I don't know how.
One thing I am curious about is
kubectl describe pod my-first-pod...
...
Volumes:
  default-token-mtz9g:
    Type: Secret
Where can I find this volume? I can't kubectl exec into a container because none is running, since the pod can't pull the image.
Do you have any ideas on how I could troubleshoot this?
Thank you!
Create a kubernetes secret to access the custom repository. One way is to provide the server, user and password manually. Documentation link.
kubectl create secret docker-registry $YOUR_REGISTRY_NAME --docker-server=https://$YOUR_SERVER_DNS/v2/ --docker-username=$YOUR_USER --docker-password=$YOUR_PASSWORD --docker-email=whatever@gmail.com --namespace=default
Then use it in your yaml
...
imagePullSecrets:
- name: $YOUR_REGISTRY_NAME
Regarding the default-token that you see mounted: it belongs to the service account that your pod uses to talk to the kubernetes api. You can find the service account by running kubectl get sa and kubectl describe sa $DEFAULT_SERVICE_ACCOUNT_ID. Find the token by running kubectl get secrets and kubectl describe secret $SECRET_ID. To clarify, this service account and its token have nothing to do with the docker registry unless you specify it. To include the registry in the service account, follow this guide link.
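Concretely, those inspection steps could look like this (the token name is the one from your pod description):
kubectl get sa
kubectl describe sa default
kubectl get secrets
kubectl describe secret default-token-mtz9g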
We have created a Nexus3 private docker registry on a CentOS machine, and the same IP details were updated in daemon.json under the docker folder.
Docker pull and push is working fine.
The same image is failing at the image pull stage when deploying to kubernetes.
$ kubectl run deployname --image=nexus3privaterepo:port/image
Beforehand, we created secret entries via $ kubectl create secret with the same user ID and password used for docker login -u userid -p passwd.
My problem here is that the image pull from the Nexus3 docker host is failing.
Please suggest how to verify the login via a kubernetes command and how to resolve this image pull issue.
Looking forward to your suggestions, thanks in advance.
So when pulling from private repos, you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your user's credentials
  imagePullSecrets:
  - name: regcred
You would then use the kubectl apply -f functionality. I am not actually sure you can use this with the imperative CLI version of running a deployment, but all the documentation on this can be found here.
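If the regcred secret doesn't exist yet, a sketch of the full flow (server, user, and password are the placeholders from the question; the manifest above is assumed saved as pod.yaml):
kubectl create secret docker-registry regcred \
  --docker-server=nexus3privaterepo:port \
  --docker-username=<userid> --docker-password=<passwd>
kubectl apply -f pod.yaml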