Need help referencing image correctly in minikube private registry - kubernetes

I'm trying to deploy a custom pod on minikube and I'm getting the following message regardless of my tweaks:
Failed to load logs: container "my-pod" in pod "my-pod-766c646c85-nbv4c" is waiting to start: image can't be pulled
Reason: BadRequest (400)
I did all sorts of experiments based on https://minikube.sigs.k8s.io/docs/handbook/pushing/ and https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/ without success.
I ended up trying minikube image load myimage:latest and referencing it in the container spec as:
...
containers:
  - name: my-pod
    image: myimage:latest
    ports:
      - name: my-pod
        containerPort: 8080
        protocol: TCP
...
Should/can I use minikube image?
If so, should I use the full image name docker.io/library/myimage:latest or just the image suffix myimage:latest?
Is there anything else I need to do to make minikube locate the image?
Is there a way to get the logs of the bad request itself to see what is going on (I don't see anything in the api server logs)?
I also see the following error in the minikube system:
Failed to load logs: container "registry-creds" in pod "registry-creds-6b884645cf-gkgph" is waiting to start: ContainerCreating
Reason: BadRequest (400)
Thanks!
Amos

You should set the imagePullPolicy to IfNotPresent. Changing that tells kubernetes not to pull the image if it is already present locally.
...
containers:
  - name: my-pod
    image: myimage:latest
    imagePullPolicy: IfNotPresent
    ports:
      - name: my-pod
        containerPort: 8080
        protocol: TCP
...
A quirk of kubernetes is that if you specify an image with the latest tag as you have here, it will default to using imagePullPolicy=Always, which is why you are seeing this error.
More on how kubernetes decides the default image pull policy
If you need your image to always be pulled in production, consider using helm to template your kubernetes yaml configuration.
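For completeness, the minikube image load route from the question does work together with this pull policy. A minimal sketch of that workflow, assuming the image name from the question and a hypothetical my-pod.yaml manifest containing the spec above:

# build the image locally, then copy it into the minikube node
docker build -t myimage:latest .
minikube image load myimage:latest

# confirm the image is now present inside minikube
minikube image ls | grep myimage

# with imagePullPolicy: IfNotPresent the kubelet uses the local copy instead of pulling
kubectl apply -f my-pod.yaml
kubectl get pods -w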

Related

Probes not found on $PATH

I am redeploying a K3s deployment from a few months ago. Back then, it worked perfectly, with no problems. However, when I try deploying it now, after making some other fixes, I get the following error:
Warning Unhealthy 32m kubelet Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "8078b7c54b9bb1609451ae1c2e832ede0670f264490f6ee34e334673fd025681": OCI runtime exec failed: exec failed: unable to start container process: exec: "grpc_health_probe": executable file not found in $PATH: unknown
This is the .yaml file I am using for the deployment.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vei-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server-pod
  template:
    metadata:
      labels:
        app: server-pod
    spec:
      containers:
        - name: server-pod
          image: myname/mydeployment:latest
          env:
            - name: AWS_ACCESS_KEY_ID
              value: $AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              value: $AWS_SECRET_ACCESS_KEY
          ports:
            - name: grpc
              containerPort: 50051
          livenessProbe:
            exec:
              command:
                - grpcurl
                - -plaintext
                - localhost:50051
                - ping.Pinger/Ping
          readinessProbe:
            exec:
              command:
                - grpc_health_probe
                - -addr=:50051
The same error is thrown for every command in both the liveness probe and the readiness probe. Has something changed with respect to probes in K3s over the past few months?
Warning Unhealthy 32m kubelet Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "8078b7c54b9bb1609451ae1c2e832ede0670f264490f6ee34e334673fd025681": OCI runtime exec failed: exec failed: unable to start container process: exec: "grpc_health_probe": executable file not found in $PATH: unknown
This error could be caused by a few different scenarios, as addressed below:
It could be that the container image you are using is missing the grpc_health_probe executable, or that the image is not configured so that the executable can be found on the $PATH. If the image is correctly configured, it could also be that the kubelet is not able to access the container image.
As @DazWilkin pointed out, the issue is that the grpc_health_probe binary is not present in your container, so the deployment fails because the file needed by the readiness probe does not exist where the probe expects it. To solve this, make sure the file is present in the container, either by adding it to the container image or by mounting it into the container. You will also need to make sure that grpc_health_probe is on the $PATH (or specify the full path to the executable in the readiness probe configuration), and that its permissions allow it to be executed. Once the file is present, the readiness probe should start working and you should be able to deploy your K3s deployment without any further issues.
For more info follow this doc.
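If you go the route of adding the binary to the image, a rough Dockerfile sketch looks like the following; the base image and the GRPC_HEALTH_PROBE_VERSION value are assumptions here, so adapt them to your actual build:

# sketch: bake grpc_health_probe into the image so the exec readiness probe can find it
FROM alpine:3.17
# example version; pick a release that matches your architecture
ARG GRPC_HEALTH_PROBE_VERSION=v0.4.19
RUN wget -qO /bin/grpc_health_probe \
      https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
    chmod +x /bin/grpc_health_probe
# ... the rest of your image (your gRPC server) goes here ...

Note that the liveness probe in the question execs grpcurl, which has to be present in the image in exactly the same way.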

Unable to access minikube IP address

I am an absolute beginner to Kubernetes, and I was following this tutorial to get started. I have managed to write the yaml files. However, once I deploy it, I am not able to access the web app.
This is my webapp yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nanajanashia/k8s-demo-app:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: USER_NAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-user
            - name: USER_PWD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: mongo-password
            - name: DB_URL
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: mongo-url
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-servicel
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30200
When I run the command: kubectl get node
When I run the command: kubectl get pods, I can see the pods running.
kubectl get svc
I then checked the logs for webapp, and I don't see any errors.
I then checked the detailed output by running the command: kubectl describe pod podname
I don't see any obvious errors in the result above, but again I am not experienced enough to check if there is any config that's not set properly.
Other things I have done as troubleshooting:
Ran the following command for minikube to open up the app: minikube service webapp-servicel. It opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and all relevant folders, and ran everything again.
Pinged the IP address directly from the command line, and cannot reach it.
I would appreciate if someone can help me fix this.
Try these 3 options (a concrete example with your Service name follows after the list):
Run kubectl get node -o wide to get the IP address of the node, and then open NODE_IP_ADDRESS:30200 in a web browser.
Alternatively, you can run minikube service <SERVICE_NAME> --url, which will give you a direct URL to access the application, and open that URL in a web browser.
kubectl port-forward svc/<SERVICE_NAME> 3000:3000
and access the application on localhost:3000
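For example, with the Service name and nodePort taken from the question's yaml (webapp-servicel and 30200), the three options would look roughly like this:

# option 1: node IP + nodePort
kubectl get nodes -o wide          # note the INTERNAL-IP of the node
# then browse to http://<NODE_IP>:30200

# option 2: let minikube resolve the URL for you
minikube service webapp-servicel --url

# option 3: port-forward the Service to localhost
kubectl port-forward svc/webapp-servicel 3000:3000
# then browse to http://localhost:3000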
Ran the following command for the minikube to open up the app : minikube service webapp-servicel, it opens up the web page, but again does not connect to the IP.
Uninstalled minikube, kubectl and .kube and run everything again.
pinged the ip address directly from command line, and cannot reach.
I suggest you try port forwarding:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
kubectl port-forward svc/x-service <local-port>:<service-port>
I got stuck here as well. After looking through some of the gitlab issues, I found a helpful tip about the minikube driver. The instructions for starting minikube are incorrect in the video if you used
minikube start -driver docker
Here's how to fix your problem.
stop minikube
minikube stop
delete minikube (this deletes your cluster)
minikube delete
start up minikube again, but this time specify the hyperkit driver
minikube start --vm-driver=hyperkit
check status
minikube status
reapply your components in this order:
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yaml
get your ip
minikube ip
open a browser, go to ip address:30200 (or whatever the port you defined was, mine was 30100). You should see an image of a dog and a form.
Some information in this SO post is useful too.
On Windows 11 with Ubuntu 20.04 WSL, it worked for me by using:
minikube start --driver=hyperv
On Windows 10 with Docker Desktop you do not even need to use minikube. Just enable Kubernetes in the Docker Desktop settings and use kubectl. Check the link for further information.
Using the Kubernetes that comes with Docker Desktop, I could simply reach the webapp at localhost:30100. In my case, for some reason, I had to pull the mongo docker image manually with docker pull mongo:5.0.

How to resolve this issue ErrImageNeverPull pod creation status kubernetes

I am creating a pod from an image which resides on the master node. When I create a pod on the master node to be scheduled on the worker node, the pod status is ErrImageNeverPull.
apiVersion: v1
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
    - name: cloud-pipe
      image: cloud-pipeline:latest
      command: ["sleep"]
      args: ["infinity"]
Kubectl describe pod details:
Type     Reason             Age                   From               Message
----     ------             ----                  ----               -------
Normal   Scheduled          15m                   default-scheduler  Successfully assigned default/cloud-pipe to knode
Warning  ErrImageNeverPull  5m54s (x49 over 16m)  kubelet            Container image "cloud-pipeline:latest" is not present with pull policy of Never
Warning  Failed             51s (x72 over 16m)    kubelet            Error: ErrImageNeverPull
How do I resolve this issue? Also, does Kubernetes by default look on the worker node for the image to exist? Thanks
When kubernetes creates containers, it first looks at local images, and then tries the registry (the Docker registry by default).
You are getting this error because:
your image can't be found locally on your node.
you specified imagePullPolicy: Never, so the kubelet will never try to download the image from a registry.
You have a few ways of resolving this, but all of them generally come down to getting the image onto your node and tagging it properly.
To get the image onto your node you can:
copy images from one node to another
build the image from an existing Dockerfile
Once you have the image, tag it and specify it in the deployment:
docker tag cloud-pipeline:latest mytest:mytest
apiVersion: v1
kind: Pod
metadata:
  name: cloud-pipe
  labels:
    app: cloud-pipe
spec:
  containers:
    - name: cloud-pipe
      image: mytest:mytest
      imagePullPolicy: Never
      command: ["sleep"]
      args: ["infinity"]
Or you can configure your own local registry, push the tagged image into it, and use imagePullPolicy: IfNotPresent. More information in @dryairship's answer.
Also, please be sure to use eval $(minikube docker-env) for imagePullPolicy: Never images in case you are using minikube (you haven't specified any tag, but it can be helpful). More information in the Getting "ErrImageNeverPull" in pods question.
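In case minikube is what you are running on, a minimal sketch of that docker-env workflow looks like this; the image name is taken from the question and cloud-pipe.yaml is a hypothetical file name for the pod manifest above:

# point the local docker client at the Docker daemon inside minikube
eval $(minikube docker-env)

# build (or re-tag) the image directly inside minikube, so no pull is ever needed
docker build -t cloud-pipeline:latest .
docker images | grep cloud-pipeline    # confirm the kubelet will find it locally

# a pod using image: cloud-pipeline:latest with imagePullPolicy: Never can now start
kubectl apply -f cloud-pipe.yaml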

Container not maintaining its state using kubernetes?

I have a service which runs in Apache. The container status is showing as Completed and the container keeps restarting. Why is the container not maintaining its state as Running, even though the arguments passed do not have issues?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ***
spec:
  selector:
    matchLabels:
      app: ***
  replicas: 1
  template:
    metadata:
      labels:
        app: ***
    spec:
      containers:
        - name: ***
          image: ****
          command: ["/bin/sh", "-c"]
          args: ["echo\ sid\ |\ sudo\ -S\ service\ mysql\ start\ &&\ sudo\ service\ apache2\ start"]
          volumeMounts:
            - mountPath: /var/log/apache2/
              name: apache
            - mountPath: /var/log/***/
              name: ***
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: apache
          hostPath:
            path: "/home/sandeep/logs/apache"
        - name: vusmartmaps
          hostPath:
            path: "/home/sandeep/logs/***"
Soon after executing these arguments, it shows its status as Completed and goes into a restart loop. What can we do to keep its status as Running?
Please be advised this is not a good practice.
If you really want this working that way, your last process must not end.
For example, add sleep 9999 to your container args, as shown in the sketch below.
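A minimal sketch of that workaround (again, not recommended), assuming the command and args from your manifest:

# fragment of the container spec: keep pid 1 alive after both services have started
command: ["/bin/sh", "-c"]
args: ["echo sid | sudo -S service mysql start && sudo service apache2 start && sleep 9999"]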
The best option would be splitting those into 2 separate Deployments.
First, it would be easy to scale them independently.
Second, the image would be smaller for each Deployment.
Third, Kubernetes would have a full control over those Deployments and you could utilize self-healing and rolling-updates.
There is a really good guide and examples on Deploying WordPress and MySQL with Persistent Volumes, which I think would be perfect for you.
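A minimal sketch of that split, using the official images mentioned later in this answer (the names, labels and the MYSQL_ROOT_PASSWORD value are placeholders, not taken from your manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme        # placeholder; use a Secret in practice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd:alpine
          ports:
            - containerPort: 80

The apache Deployment would then talk to mysql through a ClusterIP Service, exactly as in the WordPress/MySQL guide linked above.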
But if you prefer to use just one pod, then you would need to split your image or use the official Docker images, and your pod might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: test
spec:
  containers:
    - name: mysql
      image: mysql:5.6
    - name: apache
      image: httpd:alpine
      ports:
        - containerPort: 80
      volumeMounts:
        - name: apache
          mountPath: /var/log/apache2/
  volumes:
    - name: apache
      hostPath:
        path: "/home/sandeep/logs/apache"
You would need to expose the pod using a Service:
$ kubectl expose pod app --type=NodePort --port=80
service "app" exposed
Checking what port it has:
$ kubectl describe service app
...
NodePort: <unset> 31418/TCP
...
Also you should read Communicate Between Containers in the Same Pod Using a Shared Volume.
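As a tiny illustration of that last link, the usual pattern is an emptyDir volume mounted into both containers; the pod below is a generic writer/reader sketch, not your apache/mysql setup:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["/bin/sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["/bin/sh", "-c", "sleep 5; tail -f /data/out.log"]
      volumeMounts:
        - name: shared-data
          mountPath: /data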
You want to start apache and mysql in the same container and keep it running, don't you?
Well, let's break down why it exits first. Kubernetes, just like Docker, will run whatever command you give inside the container. If that command finishes, the container stops. echo sid | sudo -S service mysql start && sudo service apache2 start asks your init process to start both mysql and apache, but the thing is that Kubernetes is not aware of any init inside the container.
In fact, the command statement becomes the process with pid 1, replacing the init process and overriding whatever default startup command you have in your container image. Whenever the process with pid 1 exits, the container stops.
Therefore, in your case, you would have to start whatever init system you have in your container.
However, this brings us to another problem: Kubernetes already acts as an init system. It starts your pods and supervises them. Therefore all you need is to start two containers instead, one for mysql and another one for apache.
For example, you could use the official Docker Hub images from https://hub.docker.com/_/httpd/ and https://hub.docker.com/_/mysql. They already come with both services configured to start up correctly, so you don't even have to specify command and args in your deployment manifest.
Containers are not tiny VMs. You need two in this case, one running MySQL and another running Apache. Both have standard community images available, which I would probably start with.

Chaining container error in Kubernetes

I am new to kubernetes and docker. I am trying to chain 2 containers in a pod such that the second container should not be up until the first one is running. I searched and got a solution here. It says to add a "depends" field in the YAML file for the container which is dependent on another container. The following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
    - name: container1
      image: <image-name>
      ports:
        - containerPort: 8080
          hostPort: 8080
    - name: container2
      image: <image-name>
      depends: ["container1"]
Kubernetes gives me the following error after running the above yaml file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1, but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
I have attached a link to help you understand the structure of the k8s resources:
kubectl-explain
There is no property "depends" in the Container API object.
You can split your containers into two different pods and let the kubernetes CLI wait for the first container to become available:
kubectl create -f container1.yaml --wait # run command until the pod is available.
kubectl create -f container2.yaml --wait
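If your kubectl version does not offer such a --wait behaviour for create, a hedged alternative that achieves the same sequencing is kubectl wait; the label selector below assumes container1's pod is labeled app=container1:

# create the first pod, then block until it is Ready before creating the second one
kubectl create -f container1.yaml
kubectl wait --for=condition=Ready pod -l app=container1 --timeout=120s
kubectl create -f container2.yaml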