I was trying to test a scenario where a pod mounts a volume and writes a file to it. The YAML below works fine when I exclude command and args; however, with command and args it fails with CrashLoopBackOff.
The describe command is not providing much information about the failure. What's wrong here?
Note: I was running this YAML on Katacoda.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: voltest
  name: voltest
spec:
  replicas: 1
  selector:
    matchLabels:
      run: voltest
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: voltest
    spec:
      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        args: ["-c", "echo 'test complete' > /var/testOut.txt"]
      volumes:
      - name: mydir
        hostPath:
          path: /var/local/aaa
          type: DirectoryOrCreate
Describe command output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/voltest-78678dd56c-h5frs to controlplane
Normal Pulling 19s (x3 over 48s) kubelet, controlplane Pulling image "nginx"
Normal Pulled 17s (x3 over 39s) kubelet, controlplane Successfully pulled image "nginx"
Normal Created 17s (x3 over 39s) kubelet, controlplane Created container voltest
Normal Started 17s (x3 over 39s) kubelet, controlplane Started container voltest
Warning BackOff 5s (x4 over 35s) kubelet, controlplane Back-off restarting failed container
You've configured your pod to run a single shell command:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt"]
This means that the pod starts up, runs echo 'test complete' > /var/testOut.txt, and then immediately exits. From the perspective of Kubernetes, this is a crash.
You've replaced the default behavior of the nginx image ("run nginx") with a shell command.
If you want the pod to continue running, you'll need to arrange for it to run some sort of long-running command. A simple solution would be something like:
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/testOut.txt; sleep 3600"]
This will cause the pod to sleep for an hour before exiting, giving you time to inspect the results of your shell command.
Note that your shell command isn't testing anything useful; you've mounted your mydir volume on /var/local/aaa, but your shell command is writing to /var/testOut.txt, so it's not making any use of the volume.
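Putting the two fixes together (write into the mounted path, then keep the container alive), a minimal sketch of the corrected container section might look like this:

      containers:
      - image: nginx
        name: voltest
        volumeMounts:
        - mountPath: /var/local/aaa
          name: mydir
        command: ["/bin/sh"]
        # write into the mounted volume, then sleep so the pod stays Running
        args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt; sleep 3600"]

You can then verify the file landed on the volume (replace <pod-name> with the generated pod name from kubectl get pods):

kubectl exec <pod-name> -- cat /var/local/aaa/testOut.txt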
Apply the following YAML file to a Kubernetes cluster:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
    - name: busybox
      image: busybox:latest
      imagePullPolicy: IfNotPresent
Could the status be "Running" if I run kubectl get pod freebox? Why?
If formatting errors are ignored: no, the pod won't be in Running status:
controlplane $ kubectl get pods freebox
NAME READY STATUS RESTARTS AGE
freebox 0/1 CrashLoopBackOff 3 81s
Because if you look at the Dockerfile of busybox, the CMD is "sh", which completes immediately, so the pod gets restarted (because the default restart policy is Always):
https://hub.docker.com/layers/busybox/library/busybox/latest/images/sha256-bc02457f8f5a4a3cd931028ec76c7468cfa8b44d7d89c4a91df1fd82285da681?context=explore
ADD file ... in /
CMD ["sh"]
See the describe output of the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/freebox to node01
Normal Pulled 7s (x2 over 8s) kubelet, node01 Container image "busybox:latest" already present on machine
Normal Created 6s (x2 over 7s) kubelet, node01 Created container busybox
Normal Started 6s (x2 over 7s) kubelet, node01 Started container busybox
Warning BackOff 5s (x2 over 6s) kubelet, node01 Back-off restarting failed container
The busybox image needs a long-running command to keep the pod running. Add a command in the .spec.containers section under the busybox container:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
    - name: busybox
      command:
        - sleep
        - "4800"
      image: busybox:latest
      imagePullPolicy: IfNotPresent
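With sleep as the main process, the container no longer exits immediately, so the pod should reach Running (until the sleep ends after 80 minutes); you should see something like:

controlplane $ kubectl get pods freebox
NAME      READY   STATUS    RESTARTS   AGE
freebox   1/1     Running   0          10s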
Pod containers are not ready and get stuck in the Waiting state, over and over, every single time after they run sh commands (/bin/sh as well).
For example, all the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps just go to "Completed" status after executing the sh command, or, if I set "restartPolicy: Always", they sit in the Waiting state with reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.)
If I use the sh command within a container, after creating it I can read with "kubectl logs" that the env variable was set correctly.
The expected behaviour is to get the pod's containers running after they execute the sh command.
I cannot find references regarding this particular problem and I need a little help if possible; thank you very much in advance!
Please disregard the image used; I tried different images and the problem happens either way.
Environment: Kubernetes v1.17.1 on a QEMU VM
yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      ports:
        - containerPort: 88
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
  restartPolicy: Always
describe pod:
kubectl describe pod dapi-test-pod
Name:         dapi-test-pod
Namespace:    default
Priority:     0
Node:         kw1/10.1.10.31
Start Time:   Thu, 21 May 2020 01:02:17 +0000
Labels:       <none>
Annotations:  cni.projectcalico.org/podIP: 192.168.159.83/32
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status:       Running
IP:           192.168.159.83
IPs:
  IP:  192.168.159.83
Containers:
  test-container:
    Container ID:  docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
    Image:         nginx
    Image ID:      docker-pullable://nginx@sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
    Port:          88/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
      -c
      env
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 21 May 2020 01:13:21 +0000
      Finished:     Thu, 21 May 2020 01:13:21 +0000
    Ready:          False
    Restart Count:  7
    Environment:
      SPECIAL_LEVEL_KEY:  <set to the key 'how' of config map 'special-config'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zqbsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zqbsw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
You can use this manifest; the command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions", and the args are then passed to the shell as the script to run. Multiline args make it simple and easy to read. Your pod will display its environment variables and also start the NGINX process without stopping:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  how: very
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      ports:
        - containerPort: 88
      command: ["/bin/sh", "-c"]
      args:
        - env;
          nginx -g 'daemon off;';
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: how
  restartPolicy: Always
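Once the pod is up, you can confirm both halves of the script ran: the env output lands in the container log, and nginx keeps the pod Running. For example (log output abridged; the exact variables will vary):

kubectl logs dapi-test-pod | grep SPECIAL_LEVEL_KEY
SPECIAL_LEVEL_KEY=very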
This happens because the process in the container you are running has completed and the container shuts down, so Kubernetes marks the pod as completed.
If the command that is defined in the Docker image's CMD, or the command you've added as you have done, runs to completion, the container shuts down afterwards. It's the same reason why, when you run Ubuntu using plain Docker, it starts up and then shuts down directly afterwards.
For pods, and their underlying Docker containers, to continue running, you need to start a process that will continue running. In your case, running the env command completes right away.
If you set the pod to restart Always, then Kubernetes will keep trying to restart it until it has reached its back-off threshold.
One-off commands like you're running are useful for utility-type things, i.e. do one thing and then get rid of the pod.
For example:
kubectl run tester --generator run-pod/v1 --image alpine --restart Never --rm -it -- /bin/sh -c env
To run something longer, start a process that continues running.
For example:
kubectl run tester --generator run-pod/v1 --image alpine -- /bin/sh -c "sleep 30"
That command will run for 30 seconds, and so the pod will also run for 30 seconds. It will also use the default restart policy of Always. So after 30 seconds the process completes, Kubernetes marks the pod as completed, and then restarts it to do the same thing again.
Generally pods will start a long-running process, like a web server. For Kubernetes to know whether that pod is healthy, so it can do its high-availability magic and restart it if it crashes, it can use readiness and liveness probes.
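As an illustration (the path, port, and timings here are assumptions, not from the question), probes for an nginx-style container might look like:

      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 5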
Is there a way to request the status of a readinessProbe by using a service name linked to a deployment? In an initContainer, for example?
Imagine we have a deployment X, using a readinessProbe, and a service linked to it so we can request, for example, http://service-X:8080.
Now we create a deployment Y, and in its initContainer we want to know if deployment X is ready. Is there a way to ask for something like deployment-X.ready or service-X.ready?
I know that the correct way to handle dependencies is to let Kubernetes do it for us, but I have a container that doesn't crash and I have no control over it...
You can add an nginx proxy sidecar on deployment Y.
Set deployment Y's readinessProbe to a port on nginx, and have that port proxied to deployment X's readinessProbe.
Instead of a readinessProbe you can just use an initContainer.
You create pod/deployment X, make service X, and create an initContainer that looks for service X.
If it finds it, it lets the pod start.
If it doesn't find it, it will keep looking until service X is created.
Just a simple example: we create an nginx deployment by using kubectl apply -f nginx.yaml.
nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Then we create a pod with an initContainer:
initContainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup my-nginx; do echo waiting for myapp-pod2; sleep 2; done;']
The initContainer will look for the my-nginx service; until you create it, the pod will stay in Init:0/1 status.
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/1 0 15m
After you add the service, for example by using kubectl expose deployment/my-nginx, the initContainer will find the my-nginx service and the pod will be created:
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 35m
Result:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/myapp-pod to kubeadm2
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container init-myservice
Normal Started 20s kubelet, kubeadm2 Started container init-myservice
Normal Pulled 20s kubelet, kubeadm2 Container image "busybox:1.28" already present on machine
Normal Created 20s kubelet, kubeadm2 Created container myapp-container
Normal Started 20s kubelet, kubeadm2 Started container myapp-container
Let me know if that answers your question.
I finally found a solution by following this link:
https://blog.giantswarm.io/wait-for-it-using-readiness-probes-for-service-dependencies-in-kubernetes/
We first need to create a ServiceAccount in Kubernetes to allow listing endpoints from an initContainer. After this, we ask for the available endpoints; if there is at least one, the dependency is ready (in my case).
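A minimal sketch of that approach (all names here, wait-for-x, service-x, and the bitnami/kubectl image, are illustrative assumptions, not from the original setup): the RBAC objects allow reading Endpoints, and the initContainer polls until service-x has at least one ready address.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: wait-for-x
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoint-reader
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: wait-for-x
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
subjects:
- kind: ServiceAccount
  name: wait-for-x

Then, in deployment Y's pod template:

  serviceAccountName: wait-for-x
  initContainers:
  - name: wait-for-service-x
    image: bitnami/kubectl  # any image that ships kubectl
    command: ['sh', '-c',
      'until kubectl get endpoints service-x -o jsonpath="{.subsets[*].addresses[*].ip}" | grep -q .; do echo waiting for service-x; sleep 2; done']

Endpoints only list the addresses of pods that pass their readinessProbe, which is what makes this equivalent to asking "is deployment X ready?".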
I'm trying to create a pod using my local Docker image as follows.
1. First I run this command in the terminal:
eval $(minikube docker-env)
2. I create a Docker image as follows:
sudo docker image build -t my-first-image:3.0.0 .
3. I create the pod.yml as shown below and run this command:
kubectl create -f pod.yml
4. Then I try to run this command:
kubectl get pods
but it shows the following error:
NAME                                  READY   STATUS             RESTARTS   AGE
multiplication-6b6d99554-d62kk        0/1     CrashLoopBackOff   9          22m
multiplication2019-5b4555bcf4-nsgkm   0/1     CrashLoopBackOff   8          17m
my-first-pod                          0/1     CrashLoopBackOff   4          2m51s
5. I get the pod's events:
kubectl describe pod my-first-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m22s default-scheduler Successfully assigned default/my-first-pod to minikube
Normal Pulled 5m20s (x4 over 6m17s) kubelet, minikube Successfully pulled image "docker77nira/myfirstimage:latest"
Normal Created 5m20s (x4 over 6m17s) kubelet, minikube Created container
Normal Started 5m20s (x4 over 6m17s) kubelet, minikube Started container
Normal Pulling 4m39s (x5 over 6m21s) kubelet, minikube pulling image "docker77nira/myfirstimage:latest"
Warning BackOff 71s (x26 over 6m12s) kubelet, minikube Back-off restarting failed container
Dockerfile
FROM node:carbon
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
pod.yml
kind: Pod
apiVersion: v1
metadata:
  name: my-first-pod
spec:
  containers:
    - name: my-first-container
      image: my-first-image:3.0.0
index.js
var http = require('http');
var server = http.createServer(function(request, response) {
response.statusCode = 200;
response.setHeader('Content-Type', 'text/plain');
response.end('Welcome to the Golden Guide to Kubernetes
Application Development!');
});
server.listen(3000, function() {
console.log('Server running on port 3000');
});
Try checking the logs with the command kubectl logs -f my-first-pod.
I succeeded in running your image by performing these steps:
docker build -t foo .
Then I checked whether the container works: docker run -it foo
/app/index.js:5
response.end('Welcome to the Golden Guide to Kubernetes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Invalid or unexpected token
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:617:28)
at Object.Module._extensions..js (module.js:664:10)
at Module.load (module.js:566:32)
at tryModuleLoad (module.js:506:12)
at Function.Module._load (module.js:498:3)
at Function.Module.runMain (module.js:694:10)
at startup (bootstrap_node.js:204:16)
at bootstrap_node.js:625:3
Not sure if this was the outcome you wanted to see; the container itself runs, but in Kubernetes it gets into ErrImagePull.
Then, after editing your pod.yml (inspired by @Harsh Manvar), it works fine with the following. So the container exiting after the completed command was just part of the problem.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  restartPolicy: Never
  containers:
    - name: hello
      image: "foo"
      imagePullPolicy: Never
      command: [ "sleep" ]
      args: [ "infinity" ]
This is Minikube, so you can reuse the images, but if you had more nodes this might not work at all. You can find a good explanation about using local Docker images with Kubernetes here.
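For completeness, the SyntaxError in the docker run output above comes from the line break inside the string literal in index.js. A corrected version keeps the string on one line:

var http = require('http');
var server = http.createServer(function(request, response) {
  response.statusCode = 200;
  response.setHeader('Content-Type', 'text/plain');
  // the string literal must not contain a raw line break
  response.end('Welcome to the Golden Guide to Kubernetes Application Development!');
});
server.listen(3000, function() {
  console.log('Server running on port 3000');
});

After rebuilding the image, node index.js starts an HTTP server on port 3000 and keeps running, so the sleep workaround is no longer needed.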
kind: Pod
apiVersion: v1
metadata:
  name: my-first-pod
spec:
  containers:
    - name: my-first-container
      image: my-first-image:3.0.0
      command: [ "sleep" ]
      args: [ "infinity" ]
I think your pod is getting terminated after the execution of the script inside index.js.
People,
I am trying to create a simple file /tmp/tarte.test with initContainers. I have a constraint: I must use an alpine image for the container. Please let me know what is missing in this simple YAML file.
apiVersion: v1
kind: Pod
metadata:
  name: initonpod
  namespace: prod
  labels:
    app: myapp
spec:
  containers:
  - name: mycont-nginx
    image: alpine
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
The describe output of the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9s default-scheduler Successfully assigned prod/initonpod to k8s-node-1
Normal Pulled 8s kubelet, k8s-node-1 Container image "alpine" already present on machine
Normal Created 8s kubelet, k8s-node-1 Created container
Normal Started 7s kubelet, k8s-node-1 Started container
Normal Pulling 4s (x2 over 7s) kubelet, k8s-node-1 pulling image "alpine"
Normal Pulled 1s (x2 over 6s) kubelet, k8s-node-1 Successfully pulled image "alpine"
Normal Created 1s (x2 over 5s) kubelet, k8s-node-1 Created container
Normal Started 1s (x2 over 5s) kubelet, k8s-node-1 Started container
Warning BackOff 0s kubelet, k8s-node-1 Back-off restarting failed container
And if I change the alpine image to an nginx image for the container... it works fine.
You get "Back-off restarting failed container" because of your container spec:
spec:
  containers:
  - name: mycont-nginx
    image: alpine
This alpine container doesn't run a long-lived process; in Kubernetes, with the default restart policy of Always, the main container has to keep running, and that's why you are getting the error. When you use the nginx image, it runs forever. So to use the alpine image, change the spec as below:
apiVersion: v1
kind: Pod
metadata:
  name: busypod
  labels:
    app: busypod
spec:
  containers:
  - name: busybox
    image: alpine
    command:
    - "sh"
    - "-c"
    - >
      while true; do
        sleep 3600;
      done
  initContainers:
  - name: myinit-cont
    image: alpine
    imagePullPolicy: IfNotPresent
    command:
    - touch
    - "/tmp/tarte.test"
    - sleep 200
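With this spec the main container runs an endless loop, so the pod should reach Running; an illustrative check:

kubectl get pod busypod
NAME      READY   STATUS    RESTARTS   AGE
busypod   1/1     Running   0          20s

One caveat: the init container and the main container have separate filesystems, so the /tmp/tarte.test it creates is only visible to the main container if you mount a shared volume (an emptyDir, for example) at /tmp in both.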