Kubernetes doesn't reach internal registry - kubernetes

I've deployed a Docker registry inside my Kubernetes cluster:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
registry-docker-registry ClusterIP 10.43.39.81 <none> 443/TCP 162m
I'm able to pull images from my machine (service is exposed via an ingress rule):
$ docker pull registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
...
Status: Downloaded newer image for registry-do...
When I try to test it by deploying my image into the same Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: covid-backend
  namespace: skaffold
spec:
  replicas: 3
  selector:
    matchLabels:
      app: covid-backend
  template:
    metadata:
      labels:
        app: covid-backend
    spec:
      containers:
      - image: registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd
        name: covid-backend
        ports:
        - containerPort: 8080
Then, I've tried to deploy it:
$ cat pod.yaml | kubectl apply -f -
However, Kubernetes isn't able to reach the registry:
Extract of kubectl get events:
6s Normal Pulling pod/covid-backend-774bd78db5-89vt9 Pulling image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd"
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Failed to pull image "registry-docker-registry.registry/skaffold-covid-backend:c5dfd81-dirty@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": rpc error: code = Unknown desc = failed to pull and unpack image "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to resolve reference "registry-docker-registry.registry/skaffold-covid-backend@sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd": failed to do request: Head https://registry-docker-registry.registry/v2/skaffold-covid-backend/manifests/sha256:76312ebc62c4b3dd61b4451fe01b1ecd2e6b03a2b3146c7f25df3d3cfb4512cd: dial tcp: lookup registry-docker-registry.registry: Try again
1s Warning Failed pod/covid-backend-774bd78db5-89vt9 Error: ErrImagePull
As you can see, Kubernetes is not able to access the internally deployed registry...
Any ideas?

I would recommend following the docs from k3d; more precisely, this section:
Using your own local registry
If you don't want k3d to manage your registry, you can start it with some docker commands, like:
docker volume create local_registry
docker container run -d --name registry.local -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
These commands will start your registry at registry.local:5000. In order to push to this registry, you will need to add the line to /etc/hosts as described in the previous section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you must connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.local. And then you can check your local registry.
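For reference, a minimal sketch of what that registries.yaml could contain, following the k3s registry-mirror format and using the registry.local:5000 address from the commands above:
mirrors:
  "registry.local:5000":
    endpoint:
      - http://registry.local:5000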
Pushing to your local registry address
The registry will be located, by default, at registry.local:5000 (customizable with the --registry-name and --registry-port parameters). All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname must also be resolvable from your host.
The easiest solution for this is to add an entry in your /etc/hosts file like this:
127.0.0.1 registry.local
Once again, this will only work with k3s >= v0.10.0 (see the section below when using k3s <= v0.9.1)
Local registry volume
The local k3d registry uses a volume for storing the images. This volume will be destroyed when the k3d registry is released. In order to persist this volume and make these images survive the removal of the registry, you can specify a volume with the --registry-volume and use the --keep-registry-volume flag when deleting the cluster. This will create a volume with the given name the first time the registry is used, while successive invocations will just mount this existing volume in the k3d registry container.
Docker Hub cache
The local k3d registry can also be used for caching images from the Docker Hub. You can start the registry as a pull-through cache when the cluster is created with --enable-registry-cache. Used in conjunction with --registry-volume/--keep-registry-volume, this can speed up all the downloads from the Hub by keeping a persistent cache of images on your local machine.
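As a rough sketch only (assuming the k3d v1 CLI that these docs describe; check k3d --help for the exact flags in your version), the flags above could be combined like this:
k3d create --enable-registry --enable-registry-cache --registry-volume local_registry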
Testing your registry
You should test that you can
push to your registry from your local development machine.
use images from that registry in Deployments in your k3d cluster.
We will verify these two things for a local registry (located at registry.local:5000) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
Firstly, we can download some image (like nginx) and push it to our local registry with:
docker pull nginx:latest
docker tag nginx:latest registry.local:5000/nginx:latest
docker push registry.local:5000/nginx:latest
Then we can deploy a pod referencing this image to your cluster:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: registry.local:5000/nginx:latest
        ports:
        - containerPort: 80
EOF
Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".
Additionally, there are two GitHub links worth visiting:
K3d not able resolve dns
You could try the answer provided by @rjshrjndrn; it might solve your issue with DNS.
docker images are not pulled from docker repository behind corporate proxy
An open GitHub issue on k3d with the same problem as yours.

Related

Unable to access minikube IP address

I am an absolute beginner to Kubernetes, and I was following this tutorial to get started. I have managed to write the YAML files. However, once I deploy it, I am not able to access the web app.
This is my webapp yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-servicel
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30200
When I run the command: kubectl get node
When I run the command: kubectl get pods, I can see the pods running.
kubectl get svc
I then checked the logs for the webapp, and I don't see any errors.
I then checked the detailed logs by running the command: kubectl describe pod podname
I don't see any obvious errors in the result above, but again I am not experienced enough to check if there is any config that's not set properly.
Other things I have done as troubleshooting
Ran the following command for minikube to open up the app: minikube service webapp-servicel. It opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and all relevant folders, and ran everything again.
Pinged the IP address directly from the command line, and cannot reach it.
I would appreciate if someone can help me fix this.
Try these 3 options (a concrete example follows below):
Run kubectl get node -o wide to get the IP address of the node, and then open NODE_IP_ADDRESS:30200 in a web browser.
Alternatively, you can run minikube service <SERVICE_NAME> --url, which will give you a direct URL to access the application; open that URL in a web browser.
kubectl port-forward svc/<SERVICE_NAME> 3000:3000
and access the application on localhost:3000
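For example, applied to the service from the question (webapp-servicel, NodePort 30200), the first two options would be:
kubectl get node -o wide                 # note the node's INTERNAL-IP
minikube service webapp-servicel --url   # prints a directly reachable URL
Then open http://<NODE_IP>:30200 or the printed URL in a browser.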
I suggest you try port forwarding:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
kubectl port-forward svc/x-service NodePort:Port
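A concrete sketch for the service in the question (local port first, then the service port):
kubectl port-forward svc/webapp-servicel 30200:3000
The app should then answer on http://localhost:30200.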
I got stuck here as well. After looking through some of the gitlab issues, I found a helpful tip about the minikube driver. The instructions for starting minikube are incorrect in the video if you used
minikube start --driver docker
Here's how to fix your problem.
stop minikube
minikube stop
delete minikube (this deletes your cluster)
minikube delete
start up minikube again, but this time specify the hyperkit driver
minikube start --vm-driver=hyperkit
check status
minikube status
reapply your components in this order:
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yaml
get your ip
minikube ip
open a browser, go to ip address:30200 (or whatever the port you defined was, mine was 30100). You should see an image of a dog and a form.
Some information in this SO post is useful too.
On Windows 11 with Ubuntu 20.04 WSL, it worked for me by using:
minikube start --driver=hyperv
On Windows 10 with Docker Desktop you do not even need to use minikube. Just enable Kubernetes in the Docker Desktop settings and use kubectl. Check the link for further information.
Using Docker Desktop's Kubernetes, I could simply reach the webapp at localhost:30100. In my case, for some reason, I had to pull the mongo Docker image manually with docker pull mongo:5.0.
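If you go this route, a quick sanity check (assuming the default docker-desktop context name) is:
kubectl config use-context docker-desktop
kubectl get nodes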

How do I expose a registry hosted in Minikube?

I have started a Kubernetes cluster with Minikube. I used a simple deployment file to create a deployment that runs the Registry container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
spec:
  replicas: 3
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 80
After this, I expose the deployment using a service:
$ kubectl expose deployment/registry --type="LoadBalancer" --port 5000
$ minikube service registry
This exposes my registry to the host machine. I can navigate to http://172.17.174.88:31826/v2/_catalog in my browser and see there are no repositories yet. I have an image running an ASP.NET Web API project called WeatherApp on my host machine's Docker. I run these commands:
$ docker tag 0a259f7ce186 172.17.174.88:31412/weatherapp
$ docker push 172.17.174.88:31412/weatherapp
This causes the following error:
The push refers to repository [172.17.174.88:31412/weatherapp] Get
https://172.17.174.88:31412/v2/: dial tcp 172.17.174.88:31412:
connect: no route to host
I think the problem is that my docker client is trying to connect to the registry over HTTPS, which will not work. How can I force my docker client to use HTTP to push the image to my registry?
I'm afraid that you will have no chance to fall back to just http. You are forced to use https. You need to configure insecure registries in your docker client.
This might help: https://docs.docker.com/registry/insecure/
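For example, on the machine where you run docker push, an /etc/docker/daemon.json entry for the address used above might look like this (restart the Docker daemon afterwards):
{
  "insecure-registries": ["172.17.174.88:31412"]
}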

How to edit code in kubernetes pod containers using VS Code?

Typically, if I have a remote server, I could access it using ssh, and VS Code gives a beautiful extension for editing and debugging code on the remote server. But when I create pods in Kubernetes, I can't really ssh into the container, and so I cannot edit the code inside the pod or machine. And the Kubernetes plugin in VS Code does not really help, because the plugin is used to deploy the code. So, I was wondering whether there is a way to edit code inside a pod using VS Code.
P.S. Alternatively, if there is a way to ssh into a pod in Kubernetes, that will do too.
If your requirement is for "kubectl edit xxx" to use VS Code, the solution is (a short example follows below):
For Linux,macos: export EDITOR='code --wait'
For Windows: set EDITOR=code --wait
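For example (the deployment name is just a placeholder), kubectl edit will then open the resource in VS Code and apply your changes when you close the editor tab:
export EDITOR='code --wait'
kubectl edit deployment my-deployment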
Kubernetes + Remote Development extensions now allow:
attaching to k8s pods
open remote folders
execute remotely
debug on remote
integrated terminal into remote
must have:
kubectl
docker (minimum: the Docker CLI - see "Is it possible to install only the docker cli and not the daemon")
required VS Code extensions:
Kubernetes. https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools
Remote Development - https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack
Well, a pod is just a unit of deployment in Kubernetes, which means you can tune the containers inside it to accept an SSH connection.
Let's start by getting a Docker image that allows SSH connections. The rastasheep/ubuntu-sshd:18.04 image is quite nice for this. Create a deployment with it.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: debugger
  name: debugger
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debugger
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: debugger
    spec:
      containers:
      - name: debugger
        image: rastasheep/ubuntu-sshd:18.04
        imagePullPolicy: "Always"
      hostname: debugger
      restartPolicy: Always
Now let's create a service of type LoadBalancer such that we can access the pod remotely.
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  labels:
    app: debugger
  name: debugger
spec:
  type: LoadBalancer
  ports:
  - name: "22"
    port: 22
    targetPort: 22
  selector:
    app: debugger
status:
  loadBalancer: {}
Finally, get the external IP address by running kubectl get svc | grep debugger and use it to test the SSH connection: ssh root@external_ip_address
Note the user / pass of this docker image is root / root respectively.
UPDATE
NodePort example. I tested this and it worked running ssh -p 30036 root@<node-ip>, BUT I had to enable a firewall rule to make it work. So, the nmap command that I gave you has the answer. Obviously the machines that run Kubernetes don't allow inbound traffic on weird ports. Talk to your admins so that they can give you an external IP address or at least a port on a node.
---
apiVersion: v1
kind: Service
metadata:
  name: debugger
  namespace: default
  labels:
    app: debugger
spec:
  type: NodePort
  ports:
  - name: "ssh"
    port: 22
    nodePort: 30036
  selector:
    app: debugger
status:
  loadBalancer: {}
As mentioned in some of the other answers, you can do this although it is fraught with danger as the cluster can/will replace pods regularly and when it does, it starts a new pod idempotently from the configuration which will not have your changes.
The command below will get you a shell session in your pod, which can sometimes be helpful for debugging if you don't have adequate monitoring/local testing facilities to recreate an issue.
kubectl --namespace=example exec -it my-cool-pod-here -- /bin/bash
Note: you can replace the command with any tool that is installed in your container (python3, sh, bash, etc.). Also know that some base images like alpine won't have bash installed by default.
This will open a bash session in the running container on the cluster, assuming you have the correct k8s RBAC permissions.
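For minimal base images like alpine, the same command with sh instead of bash usually works, for example:
kubectl --namespace=example exec -it my-cool-pod-here -- /bin/sh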
There is a Cloud Code extension available for VS Code that will serve your purpose.
You can install it in your Visual Studio Code to interact with your Kubernetes cluster.
It allows you to create minikube, Google GKE, Amazon EKS or Azure AKS clusters and manage them from VS Code (you can access cluster information, stream/view logs from pods and open an interactive terminal into the container).
You can also enable continuous deployment so it will continuously watch for changes in your files, rebuild the container and redeploy application to the cluster.
It is well explained in the documentation.
Hope it will be useful for your use case.

Kubernetes image pull from nexus3 docker host is failing

We have created a nexus3 Docker-hosted private registry on a CentOS machine, and the same IP details are updated in daemon.json under the Docker folder.
Docker pull and push is working fine.
The same image, when deployed to Kubernetes, fails in the image pull state.
$ kubectl run deployname --image=nexus3provaterepo:port/image
Beforehand, we create the secret entries via $ kubectl create secret, with the same user ID and password information as for docker login -u userid -p passwd.
Here my problem is that the image pull from the nexus3 Docker host is failing.
Please suggest how to verify the login via a Kubernetes command and resolve this image pull issue.
Looking forward to your suggestions, thanks in advance.
So when pulling from private repos you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your users credentials
  imagePullSecrets:
  - name: regcred
You would then use the kubectl apply -f functionality. I am not actually sure you can use this in the imperative CLI version of running a deployment, but all the documentation on this can be found here.
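For completeness, the regcred secret referenced above can be created with a command along these lines (the registry address and credentials are placeholders matching the question):
kubectl create secret docker-registry regcred \
  --docker-server=nexus3provaterepo:port \
  --docker-username=<user-id> \
  --docker-password=<password>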

How to add private DNS zone to kube-dns in GKE

I have created a k8s cluster in GKE.
I have a Docker registry created in Artifactory, which is hosted on AWS. I have a Route 53 entry for docker-repo.aws.abc.com in the aws.abc.com hosted zone in AWS.
Now, I need to configure my cluster so that the Docker images are pulled from Artifactory.
I went through the documentation and understand I will have to add a stubDomain to my kube-dns ConfigMap.
kubectl edit cm kube-dns -n kube-system
apiVersion: v1
data:
  stubDomains: |
    {"aws.abc.com" : ["XX.XX.XX.XX"]}
kind: ConfigMap
metadata:
  creationTimestamp: 2019-05-21T14:30:15Z
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
  resourceVersion: "7669"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kube-dns
  uid: f378aa5f-7bd4-11e9-9df2-42010aa93d03
However, the docker pull command still fails.
docker pull docker-repo.aws.abc.com/abc-sampleapp-java/abc-service:V-57bc9c9-201
Error response from daemon: Get https://docker-repo.aws.abc.com/v2/: dial tcp: lookup docker-dev-repo.aws.abc.com on 169.254.169.254:53: no such host
Note: When I make an entry in /etc/hosts file on worker nodes, docker pull works fine.
Pulling an image from a registry on pod start uses different DNS settings than DNS lookups made from pods inside the cluster.
When Kubernetes starts a new pod, it schedules it to a node, and then the agent on that node, the kubelet, calls the container engine (Docker) to download the image and run it with the desired configuration.
Docker uses the system DNS to resolve the address of a registry, because it runs directly on the host system, not inside Kubernetes; that is why cluster DNS settings do not affect name resolution during the image download stage. https://github.com/kubernetes/kubernetes/issues/8735 is a discussion about it on GitHub.
If we want to change DNS settings and override the registry IP for the image download stage, we should set it in the host system. In this configuration, we need to modify DNS settings on all the nodes in the cluster. The simplest way to do it is using the /etc/hosts file and adding a record with your custom IP, e.g. 192.168.1.124 example.com.
After those modifications, Docker on the nodes will use the record from /etc/hosts for your registry instead of global DNS records, because that file has higher priority, and you will be able to run pods with your image.
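For the registry in this question, that means adding a record like the following on every node (with the real IP of the Artifactory endpoint in place of XX.XX.XX.XX):
XX.XX.XX.XX docker-repo.aws.abc.com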
To update the hosts file, you can use a DaemonSet with a privileged security context, see below:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hosts-fix-script
  namespace: kube-system
  labels:
    app: hosts-fix-script
spec:
  selector:
    matchLabels:
      name: hosts-fix
  template:
    metadata:
      labels:
        name: hosts-fix
    spec:
      hostPID: true
      containers:
      - name: hosts-fix-script
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #!/bin/bash
            echo "10.0.0.11 onprem.registry" >> /etc/hosts
            echo 'Done'
You then need to run kubectl apply -f on this manifest to roll it out to the nodes.