Kubernetes creating rc with local image

I'm trying to create a ReplicationController based on an image that I created locally, but when I try to create the rc it fails with ImagePullBackOff. I have created a cluster locally using minikube.
Here is my .yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: example
spec:
  replicas: 1
  selector:
    app: ayonAppserver
  template:
    metadata:
      name: example.com
      labels:
        app: ayonAppserver
    spec:
      containers:
      - name: something
        image: nktest:10
        resources:
          limits:
            cpu: 500m
            memory: 1024Mi
Command that I run to create the rc:
kubectl create -f <file>
When I run docker images I see the image in the list:
REPOSITORY   TAG   IMAGE ID       CREATED        SIZE
nktest       10    e60b3c9c3bc6   10 hours ago   425 MB
When I run kubectl get pods:
NAME            READY   STATUS             RESTARTS   AGE
example-gr9v2   0/1     ImagePullBackOff   0          2m
I have tried to run the docker image locally, and it runs fine
docker run -d --name="testAyonApp1" nktest:10
Can anyone help to solve this?

Thanks to @BMW for helping me with the issue. The problem was that I assumed that, since I created the cluster locally using minikube, every image I build on my local machine would be visible to the minikube cluster. But an image is visible only when it is present inside the node. That's why every time the pod was created, it tried to download the image.
I have now created a Docker Hub account and pushed the image to the hub, and now things are working just fine.
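For reference, a rough sketch of that fix, assuming a Docker Hub account whose name is a placeholder here:
docker tag nktest:10 <dockerhub-username>/nktest:10
docker push <dockerhub-username>/nktest:10
Then point the rc at the pushed image (image: <dockerhub-username>/nktest:10). Alternatively, for a purely local workflow, building the image against minikube's Docker daemon (eval $(minikube docker-env)) also makes it visible to the node without pushing anywhere.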

Related

Unable to upload a file through a deployment yaml in kubernetes

I am unable to upload a file through a deployment YAML in Kubernetes.
The deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: openjdk:14
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: testing
          mountPath: "/usr/src/myapp/docker.jar"
        workingDir: "/usr/src/myapp"
        command: ["java"]
        args: ["-jar", "docker.jar"]
      volumes:
      - hostPath:
          path: "C:\\Users\\user\\Desktop\\kubernetes\\docker.jar"
          type: File
        name: testing
I get the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned default/test-64fb7fbc75-mhnnj to minikube
Normal Pulled 13s (x3 over 15s) kubelet Container image "openjdk:14" already present on machine
Warning Failed 12s (x3 over 14s) kubelet Error: Error response from daemon: invalid mode: /usr/src/myapp/docker.jar
When I remove the volumeMount, the container starts but fails with the error "unable to access docker.jar".
volumeMounts:
- name: testing
  mountPath: "/usr/src/myapp/docker.jar"
This is a community wiki answer. Feel free to expand it.
That is a known issue with Docker on Windows. Right now it is not possible to correctly mount Windows directories as volumes.
You could try some of the workarounds mentioned by @CodeWizard in this GitHub thread, like here or here.
Also, if you are using VirtualBox, you might want to check this solution:
On Windows, you cannot directly map a Windows directory to your container, because your containers reside inside a VirtualBox VM. So your docker -v command actually maps the directory between the VM and the container.
So you have to do it in two steps:
Map a Windows directory to the VM through the VirtualBox manager.
Map a directory in your container to the directory in your VM.
You'd better use the Kitematic UI to help you. It is much easier.
Alternatively, you can deploy your setup in a Linux environment to avoid this kind of issue entirely.
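If the cluster is minikube specifically, another possible workaround (only a sketch; the in-VM target path /mnt/kubernetes is an assumption) is to mount the Windows folder into the minikube VM first and hostPath-mount from there:
minikube mount C:\Users\user\Desktop\kubernetes:/mnt/kubernetes
Then, in the deployment, point the hostPath volume at the in-VM path instead of the Windows path:
volumes:
- name: testing
  hostPath:
    path: "/mnt/kubernetes/docker.jar"
    type: File
The minikube mount command has to stay running for as long as the pod needs the file.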

How to automatically restart pods when a new image is ready

I'm using K8s on GCP.
Here is my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleapp-direct
  labels:
    app: simpleapp-direct
    role: backend
    stage: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simpleapp-direct
      version: v0.0.1
  template:
    metadata:
      labels:
        app: simpleapp-direct
        version: v0.0.1
    spec:
      containers:
      - name: simpleapp-direct
        image: gcr.io/applive/simpleapp-direct:latest
        imagePullPolicy: Always
I first apply the deployment file with the kubectl apply command:
kubectl apply -f deployment.yaml
The pods were created properly.
I was expecting that every time I pushed a new image with the tag latest, the pods would be automatically killed and restarted using the new image.
I tried the rollout command
kubectl rollout restart deploy simpleapp-direct
The pods restart as I wanted.
However, I don't want to run this command every time there is a new latest build.
How can I handle this situation?
Thanks a lot
Try to use the image hash (digest) instead of a tag in your Pod definition.
Generally, there is no way to automatically restart pods when a new image is ready. It is advisable not to use image:latest (or just the bare image name) in Kubernetes, as it can cause difficulties with rolling back your deployment. Also make sure that the imagePullPolicy flag is set to Always. Normally, when you use CI/CD or GitOps, your deployment is updated automatically by these tools when the new image is ready and has passed the tests. A digest-pinned sketch is shown below.
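For example, a minimal sketch of pinning the deployment above by digest instead of the mutable :latest tag (the sha256 value is a placeholder you would take from your registry):
containers:
- name: simpleapp-direct
  image: gcr.io/applive/simpleapp-direct@sha256:<digest>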
When your Docker image is updated, you need to set up a trigger on this update within your CI/CD pipeline to re-run the deployment. I am not sure about the base system/image where you build your Docker image, but you can add Kubernetes credentials there and run the same commands you would run on your local computer, as sketched below.
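One possible form of that step, assuming the pipeline already has kubectl access to the cluster and knows the digest of the freshly pushed image (placeholder below):
kubectl set image deployment/simpleapp-direct \
  simpleapp-direct=gcr.io/applive/simpleapp-direct@sha256:<digest>
kubectl rollout status deployment/simpleapp-direct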

How to resolve this issue ErrImageNeverPull pod creation status kubernetes

I am creating a pod from an image which resides on the master node. When I create a pod on the master node to be scheduled on the worker node, I get the pod status ErrImageNeverPull.
apiVersion: v1
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
  - name: cloud-pipe
    image: cloud-pipeline:latest
    command: ["sleep"]
    args: ["infinity"]
Kubectl describe pod details:
Type     Reason             Age                   From               Message
----     ------             ----                  ----               -------
Normal   Scheduled          15m                   default-scheduler  Successfully assigned default/cloud-pipe to knode
Warning  ErrImageNeverPull  5m54s (x49 over 16m)  kubelet            Container image "cloud-pipeline:latest" is not present with pull policy of Never
Warning  Failed             51s (x72 over 16m)    kubelet            Error: ErrImageNeverPull
How do I resolve this issue? Also, does Kubernetes by default look on the worker node for the image to exist? Thanks.
When Kubernetes creates containers, it first looks for local images, and then tries the registry (Docker Hub by default).
You are getting this error because:
your image can't be found locally on your node, and
you specified imagePullPolicy: Never, so it will never try to download the image from the registry.
You have a few ways of resolving this, but all of them generally require you to get the image onto the node and tag it properly.
To get the image onto your node you can:
copy the image from one node to another, or
build the image from an existing Dockerfile.
Once you have the image, tag it and specify it in the deployment:
docker tag cloud-pipeline:latest mytest:mytest
apiVersion: v1
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
  - name: cloud-pipe
    image: mytest:mytest
    imagePullPolicy: Never
    command: ["sleep"]
    args: ["infinity"]
Or you can configure your own local registry, push the tagged image into it, and use imagePullPolicy: IfNotPresent. More information is in @dryairship's answer.
Also, please be sure to run eval $(minikube docker-env) before building imagePullPolicy: Never images, in case you are using minikube (you haven't specified any tag, but it can be helpful). More information is in the Getting "ErrImageNeverPull" in pods question.
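For the minikube case, a minimal sketch of building the image directly against the cluster's Docker daemon, so that a Never or IfNotPresent pull policy can find it (the build context . is an assumption):
eval $(minikube docker-env)
docker build -t cloud-pipeline:latest .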

kubernetes imagePullPolicy:Always is not pulling image automatically

I want Kubernetes to automatically pull the new image every time I create a new image with the latest tag. I added imagePullPolicy: Always in the pod spec, but it doesn't replace the old image with the new one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
  namespace: dev
  labels:
    app: my-node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      hostNetwork: true
      securityContext:
        fsGroup: 1000
      containers:
      - name: node
        imagePullPolicy: Always
        image: gcr.io/my-repo/my-node-app:latest
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: my-configmap
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 2
            memory: 8Gi
      restartPolicy: Always
imagePullPolicy is only taken into account by Kubernetes when a POD is created or re-started. It is NOT taken into account while a POD is running, which means it does NOT check for image updates at any time while a POD is running.
Even if another POD with the same image is scheduled onto the same Kubernetes node, the already running POD is not affected, even though Kubernetes pulls and then uses the new image for the new POD.
If you want the desired functionality, you will have to implement your own solution. You could do this by implementing a sidecar that regularly checks the Docker Repository for changes to the given tag. When it detects such a change, it could trigger a restart of the POD, which would then force the image to be re-pulled.
A restart of the POD can either be triggered by simply exiting the sidecar or by utilizing the Kubernetes API inside the sidecar. The latter solution, however, gets more complicated, as you will also need service accounts and RBAC rules to get proper permissions inside the sidecar container. It also has security implications: you would have to give the whole POD escalated permissions.
Setting imagePullPolicy: Always does not mean an image will be pulled automatically without any trigger.
I would recommend using a tagged image with semver. Since you are using a Deployment, you can perform a rolling update, which will pull the new image and roll out the change across all the replica pods one by one in a graceful way without causing any downtime.
Let's say initially the image is gcr.io/my-repo/my-node-app:v1 and you want to update it to v2:
kubectl set image deployment/node node=gcr.io/my-repo/my-node-app:v2 -n dev --record
Check the rollout history:
kubectl rollout history deployment.v1.apps/node -n dev
In case of any issue, roll back to the previous version:
kubectl rollout undo deployment.v1.apps/node -n dev
Also, if you want to be more advanced, you could do GitOps using FluxCD, which supports triggering a rollout automatically whenever a new version of an image is pushed to a container registry.
Kubernetes will pull an image only upon Pod creation, which means it does not check for image updates while a Pod is in the running state.
I would recommend using Semantic Versioning for the image tag and a CI/CD pipeline which builds, tags, and pushes to your registry. Then use a CD tool such as Keel to re-create your pods in the last step of the pipeline.
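A rough sketch of the final pipeline step under those assumptions (registry and cluster credentials available in the pipeline; the version value is a placeholder):
VERSION=v1.2.3
docker build -t gcr.io/my-repo/my-node-app:$VERSION .
docker push gcr.io/my-repo/my-node-app:$VERSION
kubectl -n dev set image deployment/node node=gcr.io/my-repo/my-node-app:$VERSION
kubectl -n dev rollout status deployment/node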

Private repository passing through kubernetes yaml file

We have tried to set up a HiveMQ manifest file. We have the HiveMQ Docker image in our private repository.
Step 1: I logged in to the private repository:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          xxxxx some environment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It is created successfully, but I am getting the issues below. Could anyone please help solve this issue?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Some times I am getting that kind of issues
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image
In order to use a private Docker registry with Kubernetes, it's not enough to run docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also describes the imagePullSecrets setting you have to add to your YAML deployment file, referencing that secret.
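A minimal sketch of that setup, assuming a secret named regcred and placeholder credentials:
kubectl create secret docker-registry regcred \
  --docker-server="private repo name" \
  --docker-username=<user> \
  --docker-password=<password>
Then reference it in the pod template of the deployment:
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: hivemq
        ...
Note also that the image value in your manifest ends with a bare colon and no tag; that alone can produce InvalidImageName, so make sure a real tag follows the final colon.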
I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427