Could the status be "Running" if I run "kubectl get pod freebox"? - kubernetes

Apply the following YAML file into a Kubernetes cluster:
apiVersion: v1
kind: Pod
metadata:
name: freebox
spec:
containers:
- name: busybox
image: busybox:latest
imagePullPolicy: IfNotPresent
Could the status be "Running" if I run kubectl get pod freebox? Why?

If the formatting errors are ignored, the pod will not be in Running status:
controlplane $ kubectl get pods freebox
NAME READY STATUS RESTARTS AGE
freebox 0/1 CrashLoopBackOff 3 81s
Because if you look at the Dockerfile of busybox, the CMD is "sh", which completes immediately, so the pod keeps getting restarted (because the default restart policy is Always):
https://hub.docker.com/layers/busybox/library/busybox/latest/images/sha256-bc02457f8f5a4a3cd931028ec76c7468cfa8b44d7d89c4a91df1fd82285da681?context=explore
ADD file ... in /    708.51 KB
CMD ["sh"]
See the describe output of the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/freebox to node01
Normal Pulled 7s (x2 over 8s) kubelet, node01 Container image "busybox:latest" already present on machine
Normal Created 6s (x2 over 7s) kubelet, node01 Created container busybox
Normal Started 6s (x2 over 7s) kubelet, node01 Started container busybox
Warning BackOff 5s (x2 over 6s) kubelet, node01 Back-off restarting failed container

The busybox image needs to be given a command to keep running.
Add a command under the busybox container in the .spec.containers section:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
  - name: busybox
    command:
    - sleep
    - "4800"
    image: busybox:latest
    imagePullPolicy: IfNotPresent
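To double-check, a minimal verification sketch (assuming the manifest above is saved as freebox.yaml):
$ kubectl apply -f freebox.yaml
$ kubectl get pod freebox   # STATUS should now show Running while the sleep command is alive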

Related

kubernetes pod (mssql-tools) failing with CrashLoopBackOff error and restarting

I'm using Rancher Desktop for Kubernetes in WSL 2 on Windows 11.
I'm trying to create a pod using the simple yaml:
apiVersion: v1
kind: Pod
metadata:
  name: mssql-tools
  labels:
    name: mssql-tools
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
But it is continuously giving CrashLoopBackOff error.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mssql-tools 0/1 CrashLoopBackOff 11 (8s ago) 14m
And here is the result of kubectl describe pod mssql-tools:
$ kubectl describe pod mssql-tools
Name: mssql-tools
Namespace: default
Priority: 0
Service Account: default
Node: desktop-2ohsprk/172.22.97.204
Start Time: Mon, 26 Dec 2022 04:34:19 +0500
Labels: name=mssql-tools
Annotations: <none>
Status: Running
IP: 10.42.0.57
IPs:
IP: 10.42.0.57
Containers:
mssql-tools:
Container ID: docker://76343010f4344a5d26fb35f3b0278271d3336e8e10d695cc22e78520262f34bf
Image: mcr.microsoft.com/mssql-tools:latest
Image ID: docker-pullable://mcr.microsoft.com/mssql-tools#sha256:62556500522072535cb3df2bb5965333dded9be47000473e9e0f84118e248642
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 26 Dec 2022 04:46:20 +0500
Finished: Mon, 26 Dec 2022 04:46:20 +0500
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 26 Dec 2022 04:45:51 +0500
Finished: Mon, 26 Dec 2022 04:45:51 +0500
Ready: False
Restart Count: 9
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkqlg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-wkqlg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/mssql-tools to desktop-2ohsprk
Normal Pulled 12m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 1.459473213s
Normal Pulled 12m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 823.403008ms
Normal Pulled 11m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 835.697509ms
Normal Pulled 11m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 873.802598ms
Normal Created 11m (x4 over 12m) kubelet Created container mssql-tools
Normal Started 11m (x4 over 12m) kubelet Started container mssql-tools
Normal Pulling 10m (x5 over 12m) kubelet Pulling image "mcr.microsoft.com/mssql-tools:latest"
Normal Pulled 10m kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 740.64559ms
Warning BackOff 6m56s (x25 over 11m) kubelet Back-off restarting failed container
Normal SandboxChanged 50s kubelet Pod sandbox changed, it will be killed and re-created.
Normal Pulled 48s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 951.332457ms
Normal Pulled 32s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 828.839917ms
Normal Pulling 4s (x3 over 49s) kubelet Pulling image "mcr.microsoft.com/mssql-tools:latest"
Normal Pulled 3s kubelet Successfully pulled image "mcr.microsoft.com/mssql-tools:latest" in 713.951656ms
Normal Created 3s (x3 over 48s) kubelet Created container mssql-tools
Normal Started 3s (x3 over 48s) kubelet Started container mssql-tools
Warning BackOff 2s (x5 over 47s) kubelet Back-off restarting failed container
The same container works perfectly if I run it via docker and I can use its shell to execute sqlcmd properly.
I can't figure out any reason for this.
Any help would be really appreciated.
Thanks
CrashLoopBackOff is a common error indicating that a pod failed to start and keeps failing repeatedly as Kubernetes tries to restart it.
To troubleshoot this issue, follow the steps below (a command sketch follows this list):
Check for “Back-off restarting failed container” by running kubectl describe pod [name].
If you get Liveness probe failed and Back-off restarting failed container messages from the kubelet, this indicates the container is not responding and is in the process of restarting.
Check the previous container instance. Run kubectl get pods to identify the pod causing the CrashLoopBackOff error. You can run kubectl logs --previous --tail 10 to get the last ten log lines from the pod.
Check the deployment logs by running the command: kubectl logs -f deploy/<deployment-name> -n <namespace>
Refer to this link for more detailed troubleshooting steps.
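For example, using the pod name from this question (standard kubectl flags; adjust names to your setup):
$ kubectl get pods                                # find the pod stuck in CrashLoopBackOff
$ kubectl describe pod mssql-tools                # look for "Back-off restarting failed container" in Events
$ kubectl logs mssql-tools --previous --tail 10   # last ten log lines of the previously crashed container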
So after trying and digging through multiple options, it finally worked by running the command sleep 3600000, i.e. giving the container a long-running process so it doesn't exit (and get restarted) immediately after starting.
Here is the working yaml:
apiVersion: v1
kind: Pod
metadata:
  name: mssql-tools
  labels:
    name: mssql-tools
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
    command: ["sleep"]
    args:
    - "3600000"
    imagePullPolicy: IfNotPresent
The command and its argument can also be written like the following:
apiVersion: v1
...
...
spec:
  containers:
  - name: mssql-tools
    image: mcr.microsoft.com/mssql-tools:latest
    command:
    - sleep
    - "3600000"
...
And by the way, you can also deploy the container by passing the command on the kubectl run command line, i.e.:
kubectl run mssql --image=mcr.microsoft.com/mssql-tools -n myNameSpace --command -- sleep 3600000
Note: You can omit -n myNameSpace if you are not deploying it in a specific namespace or deploying it in the default namespace.
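Once the pod stays up, one way to confirm the tools are usable is to exec into it; the sqlcmd path below is where the mssql-tools image usually installs it, so treat it as an assumption:
$ kubectl exec -it mssql-tools -- /opt/mssql-tools/bin/sqlcmd -?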

Why is Kubernetes pod failing to start?

I was trying to test a scenario where a pod mounts a volume and writes a file to it. The yaml below works fine when I exclude command and args; however, with command and args it fails with CrashLoopBackOff.
The describe command is not providing much information about the failure. What's wrong here?
Note: I was running this yaml on katacoda.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: voltest
  name: voltest
spec:
  replicas: 1
  selector:
    matchLabels:
      run: voltest
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: voltest
    spec:
      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt"]
      volumes:
      - name: mydir
        hostPath:
          path: /var/local/aaa
          type: DirectoryOrCreate
Describe command output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/voltest-78678dd56c-h5frs to controlplane
Normal Pulling 19s (x3 over 48s) kubelet, controlplane Pulling image "nginx"
Normal Pulled 17s (x3 over 39s) kubelet, controlplane Successfully pulled image "nginx"
Normal Created 17s (x3 over 39s) kubelet, controlplane Created container voltest
Normal Started 17s (x3 over 39s) kubelet, controlplane Started container voltest
Warning BackOff 5s (x4 over 35s) kubelet, controlplane Back-off restarting failed container
You've configured your pod to run a single shell command:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt"]
This means that the pod starts up, runs echo 'test complete' > /var/testOut.txt, and then immediately exits. From the perspective
of kubernetes, this is a crash.
You've replaced the default behavior of the nginx image ("run
nginx") with a shell command.
If you want the pod to continue running, you'll need to arrange for it
to run some sort of long-running command. A simple solution would be
something like:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt; sleep 3600"]
This will cause the pod to sleep for an hour before exiting, giving
you time to inspect the results of your shell command.
Note that your shell command isn't testing anything useful; you've
mounted your mydir volume on /var/local/aaa, but your shell
command is writing to /var/testOut.txt, so it's not making any use
of the volume.
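Putting both points together, a sketch of just the containers section (the rest of the Deployment stays as in the question) that writes into the mounted directory and then stays up long enough to inspect:
      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt; sleep 3600"]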

kubectl get pods shows CrashLoopBackoff

I'm trying to create a pod using my local Docker image as follows.
1. First I run this command in the terminal:
eval $(minikube docker-env)
2. I created a Docker image as follows:
sudo docker image build -t my-first-image:3.0.0 .
3. I created the pod.yml shown below and ran this command:
kubectl create -f pod.yml
4. Then I tried to run this command:
kubectl get pods
but it shows the following:
NAME READY STATUS RESTARTS AGE
multiplication-6b6d99554-d62kk 0/1 CrashLoopBackOff 9 22m
multiplication2019-5b4555bcf4-nsgkm 0/1 CrashLoopBackOff 8 17m
my-first-pod 0/1 CrashLoopBackOff 4 2m51
5. I describe the pod:
kubectl describe pod my-first-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m22s default-scheduler Successfully assigned default/my-first-pod to minikube
Normal Pulled 5m20s (x4 over 6m17s) kubelet, minikube Successfully pulled image "docker77nira/myfirstimage:latest"
Normal Created 5m20s (x4 over 6m17s) kubelet, minikube Created container
Normal Started 5m20s (x4 over 6m17s) kubelet, minikube Started container
Normal Pulling 4m39s (x5 over 6m21s) kubelet, minikube pulling image "docker77nira/myfirstimage:latest"
Warning BackOff 71s (x26 over 6m12s) kubelet, minikube Back-off restarting failed container
Dockerfile
FROM node:carbon
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
pods.yml
kind: Pod
apiVersion: v1
metadata:
  name: my-first-pod
spec:
  containers:
  - name: my-first-container
    image: my-first-image:3.0.0
index.js
var http = require('http');
var server = http.createServer(function(request, response) {
response.statusCode = 200;
response.setHeader('Content-Type', 'text/plain');
response.end('Welcome to the Golden Guide to Kubernetes
Application Development!');
});
server.listen(3000, function() {
console.log('Server running on port 3000');
});
Try checking the logs with kubectl logs -f my-first-pod
I succeeded in running your image by performing these steps:
docker build -t foo .
then check whether the container works: docker run -it foo
/app/index.js:5
response.end('Welcome to the Golden Guide to Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:617:28)
at Object.Module._extensions..js (module.js:664:10)
at Module.load (module.js:566:32)
at tryModuleLoad (module.js:506:12)
at Function.Module._load (module.js:498:3)
at Function.Module.runMain (module.js:694:10)
at startup (bootstrap_node.js:204:16)
at bootstrap_node.js:625:3
Not sure if this was the outcome you wanted to see; the container itself runs, but in Kubernetes it gets into ErrImagePull.
Then, after editing your Pod.yaml (inspired by @Harsh Manvar), it works fine with this. So the container exiting after the completed command was only part of the problem.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "foo"
    imagePullPolicy: Never
    command: [ "sleep" ]
    args: [ "infinity" ]
This is Minikube, so you can reuse the images, but if you had more nodes this might not work at all. You can find a good explanation about using local Docker images with Kubernetes here.
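As a side note, a sketch of an alternative to the docker-env approach, assuming a minikube release recent enough to have the image subcommand: build the image as usual and load it into the cluster.
$ docker image build -t my-first-image:3.0.0 .
$ minikube image load my-first-image:3.0.0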
kind: Pod
apiVersion: v1
metadata:
  name: my-first-pod
spec:
  containers:
  - name: my-first-container
    image: my-first-image:3.0.0
    command: [ "sleep" ]
    args: [ "infinity" ]
I think your pod is getting terminated after the script inside index.js finishes executing.

problems with the "alpine" image for my initContainers

People,
I am trying to create a simple file /tmp/tarte.test with initContainers. I have a constraint: I must use an alpine image for the container. Please let me know what is missing in this simple yaml file.
apiVersion: v1
kind: Pod
metadata:
  name: initonpod
  namespace: prod
  labels:
    app: myapp
spec:
  containers:
  - name: mycont-nginx
    image: alpine
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
The describe output of the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9s default-scheduler Successfully assigned prod/initonpod to k8s-node-1
Normal Pulled 8s kubelet, k8s-node-1 Container image "alpine" already present on machine
Normal Created 8s kubelet, k8s-node-1 Created container
Normal Started 7s kubelet, k8s-node-1 Started container
Normal Pulling 4s (x2 over 7s) kubelet, k8s-node-1 pulling image "alpine"
Normal Pulled 1s (x2 over 6s) kubelet, k8s-node-1 Successfully pulled image "alpine"
Normal Created 1s (x2 over 5s) kubelet, k8s-node-1 Created container
Normal Started 1s (x2 over 5s) kubelet, k8s-node-1 Started container
Warning BackOff 0s kubelet, k8s-node-1 Back-off restarting failed container
And if I change the alpine image to an nginx image container... it works fine.
You get Back-off restarting failed container because of your container spec:
spec:
  containers:
  - name: mycont-nginx
    image: alpine
This alpine container doesn't run forever. In Kubernetes, a pod's main container has to keep running (with the default restart policy of Always, a container that exits is restarted); that's why you are getting the error. When you use the nginx image, it runs forever. So to use the alpine image, change the spec as below:
apiVersion: v1
kind: Pod
metadata:
  name: busypod
  labels:
    app: busypod
spec:
  containers:
  - name: busybox
    image: alpine
    command:
    - "sh"
    - "-c"
    - >
      while true; do
        sleep 3600;
      done
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
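A quick way to confirm the fix, assuming the manifest above is applied as-is (note it uses the default namespace, unlike the original pod in prod):
$ kubectl get pod busypod        # READY 1/1, STATUS Running once the init container has finished
$ kubectl describe pod busypod   # the init container should show State: Terminated, Reason: Completed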

Trying to create a Kubernetes deployment but it shows 0 pods available

I'm new to k8s, so some of my terminology might be off. But basically, I'm trying to deploy a simple web api: one load balancer in front of n pods (where right now, n=1).
However, when I try to visit the load balancer's IP address it doesn't show my web application. When I run kubectl get deployments, I get this:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tl-api 1 1 1 0 4m
Here's my YAML file. Let me know if anything looks off--I'm very new to this!
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tl-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tl-api
    spec:
      containers:
      - name: tl-api
        image: tlk8s.azurecr.io/devicecloudwebapi:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: acr-auth
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: tl-api
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: tl-api
Edit 2: When I try using ACS (which supports Windows), I get this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned tl-api-3466491809-vd5kg to dc9ebacs9000
Normal SuccessfulMountVolume 11m kubelet, dc9ebacs9000 MountVolume.SetUp succeeded for volume "default-token-v3wz9"
Normal Pulling 4m (x6 over 10m) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 1s (x50 over 10m) kubelet, dc9ebacs9000 Error syncing pod
Normal BackOff 1s (x44 over 10m) kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
I then try examining the failed pod:
PS C:\users\<me>\source\repos\DeviceCloud\DeviceCloud\1- Presentation\DeviceCloud.Web.API> kubectl logs tl-api-3466491809-vd5kg
Error from server (BadRequest): container "tl-api" in pod "tl-api-3466491809-vd5kg" is waiting to start: trying and failing to pull image
When I run docker images I see the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 24 hours ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 24 hours ago 7.85GB
devicecloudwebapi dev bb33ab221910 25 hours ago 7.76GB
Your problem is that the container image tlk8s.azurecr.io/devicecloudwebapi:v1 is in a private container registry. See the events at the bottom of the output of the following command:
$ kubectl describe po -l=app=tl-api
The official Kubernetes docs describe how to resolve this issue, see Pull an Image from a Private Registry, essentially:
Create a secret with kubectl create secret docker-registry
Use it in your deployment, under the pod template's spec.imagePullSecrets key
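For example, a sketch of both steps for the registry in this question; the credential values in angle brackets are placeholders you would fill in with your own Azure Container Registry service principal details:
$ kubectl create secret docker-registry acr-auth \
    --docker-server=tlk8s.azurecr.io \
    --docker-username=<service-principal-id> \
    --docker-password=<service-principal-password> \
    --docker-email=<your-email>
The deployment above already references a secret named acr-auth via imagePullSecrets, so once that secret exists in the same namespace the image pull should succeed.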