Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found - kubernetes

I am using Kubernetes version 1.10.
I am trying to pull an image from a local Docker registry, and I have already created the correct secret.
[root@node1 ~]# kubectl get secret
NAME TYPE DATA AGE
arm-docker kubernetes.io/dockerconfigjson 1 10m
Checking the secret in detail shows the correct auth token:
[root@node1 ~]# kubectl get secret arm-docker --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d
{"auths":{"armdocker.rnd.se":{"username":"<MY-USERNAME>","password":"<MY-PASSWORD>","email":"<MY-EMAIL>","auth":"<CORRECT_AUTH_TOKEN>"}}}
But when I create a Pod, I'm getting the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13s default-scheduler Successfully assigned authorization-backend-deployment-8fd5fc8d4-msxvd to node6
Normal SuccessfulMountVolume 13s kubelet, node6 MountVolume.SetUp succeeded for volume "default-token-w7vlf"
Normal BackOff 4s (x4 over 10s) kubelet, node6 Back-off pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 4s (x4 over 10s) kubelet, node6 Error: ImagePullBackOff
Normal Pulling 1s (x2 over 12s) kubelet, node6 pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 1s (x2 over 12s) kubelet, node6 Failed to pull image "armdocker.rnd.se/proj/authorization_backend:3.6.15": rpc error: code = Unknown desc = Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found
Warning Failed 1s (x2 over 12s) kubelet, node6 Error: ErrImagePull
Why is it looking for /v1/_ping? Can I disable this somehow?
I'm unable to understand what the problem is here.

Once you have defined your secret, you need to reference it inside your Pod spec (you didn't say whether you did):
kind: Pod
...
spec:
  imagePullSecrets:
  - name: arm-docker
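For reference, a minimal sketch of a complete Pod manifest using the image and secret names from the question (the metadata and container names below are just placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: authorization-backend        # placeholder pod name
spec:
  containers:
  - name: authorization-backend      # placeholder container name
    image: armdocker.rnd.se/proj/authorization_backend:3.6.15
  imagePullSecrets:
  - name: arm-docker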

Related

Centos 8 microk8s Readiness probe failed: HTTP probe failed with statuscode: 503

I have installed microk8s on my CentOS 8 operating system.
kube-system coredns-7f9c69c78c-lxm7c 0/1 Running 1 18m
kube-system calico-node-thhp8 1/1 Running 1 68m
kube-system calico-kube-controllers-f7868dd95-dpsnl 0/1 CrashLoopBackOff 23 68m
When I run microk8s enable dns, coredns and calico-kube-controllers cannot start, as shown above.
Describing the coredns pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned kube-system/coredns-7f9c69c78c-lxm7c to localhost.localdomain
Normal Pulled 14m kubelet Container image "coredns/coredns:1.8.0" already present on machine
Normal Created 14m kubelet Created container coredns
Normal Started 14m kubelet Started container coredns
Warning Unhealthy 11m (x22 over 14m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
Normal SandboxChanged 2m8s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 2m7s kubelet Container image "coredns/coredns:1.8.0" already present on machine
Normal Created 2m7s kubelet Created container coredns
Normal Started 2m6s kubelet Started container coredns
Warning Unhealthy 2m6s kubelet Readiness probe failed: Get "http://10.1.102.132:8181/ready": dial tcp 10.1.102.132:8181: connect: connection refused
Warning Unhealthy 9s (x12 over 119s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503
Describing the calico-kube-controllers pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 73m default-scheduler no nodes available to schedule pods
Warning FailedScheduling 73m (x1 over 73m) default-scheduler no nodes available to schedule pods
Warning FailedScheduling 72m (x1 over 72m) default-scheduler 0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled 72m default-scheduler Successfully assigned kube-system/calico-kube-controllers-f7868dd95-dpsnl to localhost.localdomain
Warning FailedCreatePodSandBox 72m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f3ea36b003b0c9142ae63fee31531f9102e40ab837f4d795d1efb5c85af223ec": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a1c405cdcebe79c586badcc8da47700247751a50ef9a1403e95fc4995485fba0": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4adb07610eef0d7a618105abf72a114e486c373a02d5d1b204da2bd35268dd1b": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "96aac009175973ac4c20034824db3443b3ab184cfcd1ed23786e539fb6147796": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 71m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "79639a18edcffddbdb93492157af43bb6c1f1a9ac2af1b3fbbac58335737d5dc": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3264f006447297583a37d8cc87ffe01311deaf2a31bf25867b3b18c83db2167d": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Warning FailedCreatePodSandBox 70m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5c5cf6509bfcf515ad12bc51451e4c385e5242c4f7bb593779d207abf9c906a4": error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
Normal Pulling 70m kubelet Pulling image "calico/kube-controllers:v3.13.2"
Normal Pulled 69m kubelet Successfully pulled image "calico/kube-controllers:v3.13.2" in 50.744281789s
Normal Created 69m kubelet Created container calico-kube-controllers
Normal Started 69m kubelet Started container calico-kube-controllers
Warning Unhealthy 69m (x2 over 69m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Warning MissingClusterDNS 37m (x185 over 72m) kubelet pod: "calico-kube-controllers-f7868dd95-dpsnl_kube-system(d8c3ee40-7d3b-4a84-9398-19ec8a6d9082)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning Unhealthy 31m (x6 over 32m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Normal Pulled 30m (x4 over 32m) kubelet Container image "calico/kube-controllers:v3.13.2" already present on machine
Normal Created 30m (x4 over 32m) kubelet Created container calico-kube-controllers
Normal Started 30m (x4 over 32m) kubelet Started container calico-kube-controllers
Warning BackOff 22m (x42 over 32m) kubelet Back-off restarting failed container
Normal SandboxChanged 10m kubelet Pod sandbox changed, it will be killed and re-created.
Warning Unhealthy 9m36s (x6 over 10m) kubelet Readiness probe failed: Failed to read status file status.json: open status.json: no such file or directory
Normal Pulled 8m51s (x4 over 10m) kubelet Container image "calico/kube-controllers:v3.13.2" already present on machine
Normal Created 8m51s (x4 over 10m) kubelet Created container calico-kube-controllers
Normal Started 8m51s (x4 over 10m) kubelet Started container calico-kube-controllers
Warning BackOff 42s (x42 over 10m) kubelet Back-off restarting failed container
I cannot start my microk8s services. I don't encounter these errors on my Ubuntu server. What can I do about the errors I'm hitting on my CentOS 8 server?
Have you tried updating the microk8s version?
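If microk8s was installed as a snap (the usual way), refreshing it to a newer release is roughly the following; the channel below is only an example, so pick whichever release you need:
snap info microk8s                               # list the available channels
sudo snap refresh microk8s --channel=1.21/stable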

What can cause a ErrImagePull in kubernetes? The registry returns 403

Hey, I'm trying to get a pipeline to work with Kubernetes, but I keep getting ErrImagePull.
Earlier I was getting something along the lines of "authentication failed".
I created a secret in the namespace of the pod and I'm referring to it in the deployment file:
imagePullSecrets:
- name: "registry-secret"
I still get ErrImagePull but now for different reasons. When describing the failed pod I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m46s default-scheduler Successfully assigned <project> to <server>
Normal Pulling 3m12s (x4 over 4m45s) kubelet Pulling image "<container_url>"
Warning Failed 3m12s (x4 over 4m45s) kubelet Failed to pull image "<container_url>": rpc error: code = Unknown desc = Requesting bear token: invalid status code from registry 403 (Forbidden)
Warning Failed 3m12s (x4 over 4m45s) kubelet Error: ErrImagePull
Warning Failed 3m (x6 over 4m45s) kubelet Error: ImagePullBackOff
Normal BackOff 2m46s (x7 over 4m45s) kubelet Back-off pulling image "<container_url>"
I guess the Registry is returning 403, but why? Does it mean the user in registry-secret is not allowed to pull the image?
The OP posted in a comment that the problem was resolved:
I found the error. So I had a typo and my secret was in fact not created in the correct namespace.
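So the fix is to make sure the secret really exists in the same namespace as the pod. A quick check, using the secret name from the question and placeholder values for everything else, might look like this:
kubectl get secret registry-secret -n <pod-namespace>
# if it is missing, recreate it in that namespace, for example:
kubectl create secret docker-registry registry-secret \
  --docker-server=<registry-url> --docker-username=<user> \
  --docker-password=<password> -n <pod-namespace>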

Back-off pulling image metrics in kubernetes

I installed the k8s dashboard as described at the GitHub dashboard address:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
and the output of
kubectl get pods --all-namespaces
kubernetes-dashboard dashboard-metrics-scraper-79c5968bdc-64wl9 0/1 ImagePullBackOff 0 48m
kubernetes-dashboard kubernetes-dashboard-9f9799597-w9cp9 1/1 Running 0 48m
and the output of kubectl describe pod is:
Normal Scheduled 41m default-scheduler Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-79c5968bdc-64wl9 to lab-vm
Normal Pulling 30m (x4 over 41m) kubelet Pulling image "kubernetesui/metrics-scraper:v1.0.6"
Warning Failed 27m (x7 over 37m) kubelet Error: ImagePullBackOff
Warning Failed 24m (x5 over 37m) kubelet Failed to pull image "kubernetesui/metrics-scraper:v1.0.6": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubernetesui/metrics-scraper:v1.0.6": failed to resolve reference "docker.io/kubernetesui/metrics-scraper:v1.0.6": unexpected status code [manifests v1.0.6]: 408 Request Time-out
Warning Failed 10m (x7 over 37m) kubelet Error: ErrImagePull
Normal BackOff 81s (x73 over 37m) kubelet Back-off pulling image "kubernetesui/metrics-scraper:v1.0.6"
Please help me fix this issue.
Thanks a lot.
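The 408 Request Time-out comes back from Docker Hub, which suggests the node is having trouble reaching the registry rather than an authentication problem. One way to confirm, assuming the node uses containerd as its runtime (use docker pull instead if it runs Docker), is to try the pull directly on the node:
crictl pull docker.io/kubernetesui/metrics-scraper:v1.0.6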

Pull an Image from a Private Registry fails - ImagePullBackOff

On our K8s worker node, we created a secret with the command below to pull images from our private (Nexus) registry.
kubectl create secret docker-registry regcred --docker-server=https://nexus-server/nexus/ --docker-username=admin --docker-password=password --docker-email=user@company.com
Created my-private-reg-pod.yaml on the K8s worker node; it contains the following.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: private-reg-container
    image: nexus-server:4546/ubuntu-16:version-1
  imagePullSecrets:
  - name: regcred
Created the pod with the command below.
kubectl create -f my-private-reg-pod.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/1 ImagePullBackOff 0 27m
kubectl describe pod test-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-pod to k8s-worker01
Warning Failed 26m (x6 over 28m) kubelet, k8s-worker01 Error: ImagePullBackOff
Normal Pulling 26m (x4 over 28m) kubelet, k8s-worker01 Pulling image "sonatype:4546/ubuntu-16:version-1"
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Failed to pull image "nexus-server:4546/ubuntu-16:version-1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus-server.domain.com/nexus/v2/ubuntu-16/manifests/ver-1: no basic auth credentials
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Error: ErrImagePull
Normal BackOff 3m9s (x111 over 28m) kubelet, k8s-worker01 Back-off pulling image "nexus-server:4546/ubuntu-16:version-1"
On the terminal, Nexus login works:
docker login nexus-server:4546
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Is there something I am missing here?
Since my docker login to Nexus succeeded on the terminal, I deleted my secret and recreated it from the local Docker config with kubectl create secret generic regcred --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson, and it worked.
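Laid out on separate lines, that working approach looks roughly like this; the config.json path is wherever docker login nexus-server:4546 stored the credentials on that node, which the login warning above says is /root/.docker/config.json:
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=/root/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
# verify what was stored, as in the first question above:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d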

How to fix ImagePullBackOff with a Kubernetes pod?

I created a pod 5 hours ago. Now I have the error ImagePullBackOff.
These are the events from kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4h51m default-scheduler Successfully assigned default/nodehelloworld.example.com to minikube
Normal Pulling 4h49m (x4 over 4h51m) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Error: ErrImagePull
Normal BackOff 4h49m (x6 over 4h51m) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 4h21m (x132 over 4h51m) kubelet, minikube Error: ImagePullBackOff
Warning FailedMount 5m13s kubelet, minikube MountVolume.SetUp failed for volume "default-token-zpl2j" : couldn't propagate object cache: timed out waiting for the condition
Normal Pulling 3m34s (x4 over 5m9s) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Error: ErrImagePull
Normal BackOff 3m5s (x6 over 5m1s) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 3m5s (x6 over 5m1s) kubelet, minikube Error: ImagePullBackOff
Images on my desktop
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
milenkom/docker-demo tagname 08d27ff00255 6 hours ago 659MB
Following advice from Max and Shanica, I made a mess when tagging:
docker tag 08d27ff00255 docker-demo:latest
Works OK, but when I try
docker push docker-demo:latest
The push refers to repository [docker.io/library/docker-demo]
e892b52719ff: Preparing
915b38bfb374: Preparing
3f1416a1e6b9: Preparing
e1da644611ce: Preparing
d79093d63949: Preparing
87cbe568afdd: Waiting
787c930753b4: Waiting
9f17712cba0b: Waiting
223c0d04a137: Waiting
fe4c16cbf7a4: Waiting
denied: requested access to the resource is denied
although I am logged in
Output of docker inspect image 08d27ff00255:
[
    {
        "Id": "sha256:08d27ff0025581727ef548437fce875d670f9e31b373f00c2a2477f8effb9816",
        "RepoTags": [
            "docker-demo:latest",
            "milenkom/docker-demo:tagname"
        ],
Why does the pod still fail now?
manifest for milenkom/docker-demo:latest not found
Looks like there's no latest tag in the image you want to pull: https://hub.docker.com/r/milenkom/docker-demo/tags.
Try some existing image.
UPD (based on question update):
docker push milenkom/docker-demo:tagname
update k8s pod to point to milenkom/docker-demo:tagname
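Put together, and assuming the image ID, tag, and pod name from the question, the fix would look roughly like this (the container name is a placeholder):
docker tag 08d27ff00255 milenkom/docker-demo:tagname
docker push milenkom/docker-demo:tagname
# then point the pod at the pushed tag, for example:
kubectl set image pod/nodehelloworld.example.com <container-name>=milenkom/docker-demo:tagname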