How to resolve ErrImageNeverPull pod creation status in Kubernetes

I am creating a pod from an image that resides on the master node. When I create the pod on the master node to be scheduled on the worker node, the pod ends up with the status ErrImageNeverPull.
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
  - name: cloud-pipe
    image: cloud-pipeline:latest
    command: ["sleep"]
    args: ["infinity"]
kubectl describe pod output:
Type     Reason             Age                   From               Message
----     ------             ---                   ----               -------
Normal   Scheduled          15m                   default-scheduler  Successfully assigned default/cloud-pipe to knode
Warning  ErrImageNeverPull  5m54s (x49 over 16m)  kubelet            Container image "cloud-pipeline:latest" is not present with pull policy of Never
Warning  Failed             51s (x72 over 16m)    kubelet            Error: ErrImageNeverPull
How do I resolve this issue? Also, does Kubernetes by default look on the worker node for the image to exist? Thanks.

When Kubernetes creates containers, it first looks at local images and then tries a registry (the Docker registry by default).
You are getting this error because:
your image can't be found locally on your node, and
you specified imagePullPolicy: Never, so the image will never be downloaded from a registry.
There are a few ways to resolve this, but all of them generally come down to getting the image onto the node and tagging it properly.
To get the image onto your node you can:
copy the image from one node to another (see the sketch after this list)
build the image from an existing Dockerfile
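For the first option, here is a minimal sketch of moving the image with docker save/load, assuming a Docker runtime on both nodes and that the worker is reachable as knode (the node name from the events above); adjust hostnames and paths to your cluster:

# on the master node: export the image to a tar archive
docker save cloud-pipeline:latest -o cloud-pipeline.tar
# copy the archive to the worker node
scp cloud-pipeline.tar user@knode:/tmp/
# on the worker node: load the image into the local Docker daemon
docker load -i /tmp/cloud-pipeline.tar

If your nodes run containerd rather than Docker, the import step would instead be something like ctr -n k8s.io images import /tmp/cloud-pipeline.tar.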
Once you have the image, tag it and reference it in the pod spec:
docker tag cloud-pipeline:latest mytest:mytest
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
  - name: cloud-pipe
    image: mytest:mytest
    imagePullPolicy: Never
    command: ["sleep"]
    args: ["infinity"]
Or you can set up your own local registry, push the tagged image into it, and use imagePullPolicy: IfNotPresent. More information in the answer by @dryairship.
Also, be sure to use eval $(minikube docker-env) for imagePullPolicy: Never images in case you are using minikube (you haven't tagged the question with minikube, but it can be helpful). More information in the Getting "ErrImageNeverPull" in pods question.
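A minimal sketch of that minikube workflow, assuming the Docker driver/runtime and a Dockerfile for the image in the current directory (the manifest name cloud-pipe.yaml is a placeholder):

# point the local docker CLI at minikube's Docker daemon
eval $(minikube docker-env)
# build the image inside minikube so imagePullPolicy: Never can find it
docker build -t cloud-pipeline:latest .
# recreate the pod
kubectl delete pod cloud-pipe
kubectl apply -f cloud-pipe.yaml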

Related

Need help referencing image correctly in minikube private registry

I'm trying to deploy a custom pod on minikube and I'm getting the following message regardless of my tweaks:
Failed to load logs: container "my-pod" in pod "my-pod-766c646c85-nbv4c" is waiting to start: image can't be pulled
Reason: BadRequest (400)
I did all sorts of experiments based on https://minikube.sigs.k8s.io/docs/handbook/pushing/ and https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/ without success.
I ended up trying to use minikube image load myimage:latest and referencing it in the container spec as:
...
containers:
- name: my-pod
  image: myimage:latest
  ports:
  - name: my-pod
    containerPort: 8080
    protocol: TCP
...
Should/can I use minikube image?
If so, should I use the full image name docker.io/library/myimage:latest or just the image suffix myimage:latest?
Is there anything else I need to do to make minikube locate the image?
Is there a way to get the logs of the bad request itself to see what is going on (I don't see anything in the api server logs)?
I also see the following error in the minikube system:
Failed to load logs: container "registry-creds" in pod "registry-creds-6b884645cf-gkgph" is waiting to start: ContainerCreating
Reason: BadRequest (400)
Thanks!
Amos
You should set the imagePullPolicy to IfNotPresent. Changing that will tell Kubernetes not to pull the image if it does not need to.
...
containers:
- name: my-pod
  image: myimage:latest
  imagePullPolicy: IfNotPresent
  ports:
  - name: my-pod
    containerPort: 8080
    protocol: TCP
...
A quirk of Kubernetes is that if you specify an image with the :latest tag, as you have here, it defaults to imagePullPolicy: Always, which is why you are seeing this error.
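For example, pinning a specific tag (the 1.0.0 tag here is just a placeholder) and omitting imagePullPolicy makes the default IfNotPresent instead of Always:

containers:
- name: my-pod
  image: myimage:1.0.0   # any tag other than :latest
  # imagePullPolicy omitted -> defaults to IfNotPresent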
More on how kubernetes decides the default image pull policy
If you need your image to always be pulled in production, consider using Helm to template your Kubernetes YAML configuration.
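As for the minikube image load part of the question, here is a minimal sketch of loading and verifying the image, assuming it was built with your local Docker first:

# copy the locally built image into the minikube node
minikube image load myimage:latest
# verify it is present inside the node (it may be listed as docker.io/library/myimage:latest)
minikube image ls | grep myimage

Referencing it as myimage:latest in the container spec then works, as long as imagePullPolicy is IfNotPresent or Never so Kubernetes does not try to pull it from Docker Hub.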

Unable to upload a file through a deployment yaml in kubernetes

I am unable to upload a file through a deployment YAML in Kubernetes.
The deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: openjdk:14
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: testing
          mountPath: "/usr/src/myapp/docker.jar"
        workingDir: "/usr/src/myapp"
        command: ["java"]
        args: ["-jar", "docker.jar"]
      volumes:
      - hostPath:
          path: "C:\\Users\\user\\Desktop\\kubernetes\\docker.jar"
          type: File
        name: testing
I get the following error:
Events:
Type     Reason     Age                From               Message
----     ------     ---                ----               -------
Normal   Scheduled  19s                default-scheduler  Successfully assigned default/test-64fb7fbc75-mhnnj to minikube
Normal   Pulled     13s (x3 over 15s)  kubelet            Container image "openjdk:14" already present on machine
Warning  Failed     12s (x3 over 14s)  kubelet            Error: Error response from daemon: invalid mode: /usr/src/myapp/docker.jar
When I remove the volumeMount, it runs but fails with the error unable to access docker.jar.
volumeMounts:
- name: testing
  mountPath: "/usr/src/myapp/docker.jar"
This is a community wiki answer. Feel free to expand it.
That is a known issue with Docker on Windows. Right now it is not possible to correctly mount Windows directories as volumes.
You could try some of the workarounds mentioned by @CodeWizard in this GitHub thread, like here or here.
Also, if you are using VirtualBox, you might want to check this solution:
On Windows, you cannot directly map a Windows directory to your container, because your containers reside inside a VirtualBox VM. So your docker -v command actually maps the directory between the VM and the container.
So you have to do it in two steps:
Map a Windows directory to the VM through the VirtualBox manager.
Map a directory in your container to the directory in your VM.
You'd better use the Kitematic UI to help you. It is much easier.
Alternatively, you can deploy your setup in a Linux environment to avoid these kinds of issues entirely.
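If you stay on Windows with minikube, one possible workaround (a sketch, assuming the minikube VM driver supports host mounts; /data is an arbitrary target path inside the VM) is to expose the host directory to the minikube node first, which avoids the C: drive-letter colon that Docker misreads as a mode:

# in a separate terminal, keep this running
minikube mount C:/Users/user/Desktop/kubernetes:/data

and then point the hostPath volume at the in-VM path instead of the Windows path:

volumes:
- hostPath:
    path: /data/docker.jar
    type: File
  name: testing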

kubernetes/minikube / Warning Back-off restarting failed container

I am using kubernetes/minikube and I've tried to install my application's Docker image by using a pod configuration YAML file as follows.
So far so good, but the problem starts just when I try to build my pod by executing:
kubectl create -f employee-service-pod.yaml -n dev-samples
apiVersion: v1
kind: Pod
metadata:
  name: employee-service-pod
spec:
  containers:
  - name: employee-service-cont
    image: doviche/employee-service:latest
    imagePullPolicy: IfNotPresent
    command: ["echo", "SUCCESS"]
  restartPolicy: Always
  imagePullSecrets:
  - name: employee-service-secret
status: {}
When I check my pod, it shows the following error:
Back-off restarting failed container
Soon after I ran:
kubectl describe pod employee-service-pod -n dev-samples
which shows what is on the image attached to this post:
Honestly I have not been able to identify what's causing the Warning and that's why I've decided to share it with you to see if some good eye sees the arcane.
I appreciate any help, as I have been stuck on this for so long.
I am using minikube v1.11.0 on Linuxmint 19.1.
Many thanks in advance guys.
Your application completed its work successfully (echo SUCCESS). Since you have set the restartPolicy of the pod to Always, it tries to restart again and again and goes into a CrashLoopBackOff state.
Please change the restartPolicy to OnFailure; this should mark the pod status Completed once the process/command in the container ends.
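A minimal sketch of the amended spec, with restartPolicy sitting at the pod spec level rather than under the container:

apiVersion: v1
kind: Pod
metadata:
  name: employee-service-pod
spec:
  restartPolicy: OnFailure   # was Always
  containers:
  - name: employee-service-cont
    image: doviche/employee-service:latest
    imagePullPolicy: IfNotPresent
    command: ["echo", "SUCCESS"]
  imagePullSecrets:
  - name: employee-service-secret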
Thanks,

k8s - how to project service account token into pod

I am trying to project the serviceAccount token into my pod as described in this k8s doc - https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection.
I create a service account using the command below:
kubectl create sa acct
Then I create the pod
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
  serviceAccountName: acct
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
It fails due to - MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource
Events:
Type     Reason       Age                   From                Message
----     ------       ----                  ----                -------
Normal   Scheduled    5m15s                 default-scheduler   Successfully assigned default/nginx to minikube
Warning  FailedMount  65s (x10 over 5m15s)  kubelet, minikube   MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource
My minikube version: v0.33.1
kubectl version: 1.13
Question:
What am I doing wrong here?
I tried this on kubeadm and was able to succeed.
@Aman Juneja was right: you have to add the API server flags as described in the documentation.
You can do that by creating the service account and then adding these flags to the kube-apiserver:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-account-issuer=api
- --service-account-signing-key-file=/etc/kubernetes/pki/apiserver.key
- --service-account-api-audiences=api
After that, apply your pod.yaml and it will work, as you will see in kubectl describe pod:
Volumes:
  vault-token:
    Type: Projected (a volume that contains injected data from multiple sources)
[removed as not working solution]
Unfortunately, in my case minikube did not want to start with these flags; it got stuck on waiting for pods: apiserver. Soon I will try to debug this again.
UPDATE
It turns out you just have to pass the arguments to minikube using paths from inside the minikube VM, not from the outside as I did in the previous example (the .minikube directory), so it will look like this:
minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api
After that, creating the ServiceAccount and applying pod.yaml works.
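To confirm the projection works, a quick check (assuming the pod from the question, named nginx):

# the volume should show up as Projected in the pod description
kubectl describe pod nginx | grep -A 2 vault-token
# the token file should exist at the path declared under serviceAccountToken
kubectl exec nginx -- cat /var/run/secrets/tokens/vault-token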
You should use a Deployment, since when you use a Deployment the token is automatically mounted into the pods.

Kubernetes creating rc with local image

I'm trying to create a ReplicationController based on an image that I created locally, but when I try to create the rc it gives the error ImagePullBackOff. I have created a cluster locally using minikube.
Here is my .yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: example
spec:
  replicas: 1
  selector:
    app: ayonAppserver
  template:
    metadata:
      name: example.com
      labels:
        app: ayonAppserver
    spec:
      containers:
      - name: something
        image: nktest:10
        resources:
          limits:
            cpu: 500m
            memory: 1024Mi
Command that I run to create the rc:
kubectl create -f <file>
When I run docker images I see the image in the list:
REPOSITORY   TAG   IMAGE ID       CREATED        SIZE
nktest       10    e60b3c9c3bc6   10 hours ago   425 MB
When I run kubectl get pods:
NAME            READY   STATUS             RESTARTS   AGE
example-gr9v2   0/1     ImagePullBackOff   0          2m
I have tried to run the docker image locally, and it runs fine
docker run -d --name="testAyonApp1" nktest:10
Can anyone help to solve this?
So thanks to @BMW for helping me with the issue. The problem was that I assumed that, since I created the cluster locally using minikube, every image I create on my local machine would be visible to the minikube cluster. But an image is visible only when it is present inside the node. That's why every time I wanted to run it, it was trying to download the image.
I have now created a Docker Hub account and pushed the image to the hub, and now things are working just fine.
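For reference, a minimal sketch of that Docker Hub route (the <dockerhub-user> name is a placeholder); the alternative is to build the image inside minikube's Docker daemon with eval $(minikube docker-env), as mentioned in the first answer above:

# retag the local image under your Docker Hub namespace and push it
docker tag nktest:10 <dockerhub-user>/nktest:10
docker push <dockerhub-user>/nktest:10
# then reference <dockerhub-user>/nktest:10 as the image in the ReplicationController spec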