The Docker official image hello-world keeps reporting 'Back-off restarting failed container' with the kubectl command [duplicate] - kubernetes

This question already has answers here:
Why is a kubernetes pod with a simple hello-world image getting a CrashLoopBackOff message
(2 answers)
Closed 1 year ago.
I keep seeing Back-off restarting failed container when I try to use the Docker official image https://hub.docker.com/_/hello-world to create any pod/deployment. When I switch to other images such as nginx, everything works fine.
How can I debug and fix this issue?
Below are the event logs after creating the pod with kubectl.
root@ip-10-229-68-221:~# kubectl get event --watch
LAST SEEN TYPE REASON OBJECT MESSAGE
24s Normal Scheduled pod/helloworld-656898b9bb-98vrv Successfully assigned default/helloworld-656898b9bb-98vrv to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-98vrv Pulling image "hello-world:linux"
16s Normal Pulled pod/helloworld-656898b9bb-98vrv Successfully pulled image "hello-world:linux" in 7.371731633s
1s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
1s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
13s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
24s Normal Scheduled pod/helloworld-656898b9bb-sg6fs Successfully assigned default/helloworld-656898b9bb-sg6fs to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-sg6fs Pulling image "hello-world:linux"
13s Normal Pulled pod/helloworld-656898b9bb-sg6fs Successfully pulled image "hello-world:linux" in 9.661065021s
13s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
13s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
13s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
11s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
24s Normal Scheduled pod/helloworld-656898b9bb-vhhfm Successfully assigned default/helloworld-656898b9bb-vhhfm to kind-lab-worker
23s Normal Pulling pod/helloworld-656898b9bb-vhhfm Pulling image "hello-world:linux"
18s Normal Pulled pod/helloworld-656898b9bb-vhhfm Successfully pulled image "hello-world:linux" in 5.17232683s
3s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
2s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
3s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
2s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-vhhfm
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-sg6fs
24s Normal SuccessfulCreate replicaset/helloworld-656898b9bb Created pod: helloworld-656898b9bb-98vrv
24s Normal ScalingReplicaSet deployment/helloworld Scaled up replica set helloworld-656898b9bb to 3
79s Normal Killing pod/nginx Stopping container nginx
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
0s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
1s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-vhhfm Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-vhhfm Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-vhhfm Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-98vrv Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-98vrv Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-98vrv Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-98vrv Back-off restarting failed container
0s Warning BackOff pod/helloworld-656898b9bb-vhhfm Back-off restarting failed container
0s Normal Pulled pod/helloworld-656898b9bb-sg6fs Container image "hello-world:linux" already present on machine
0s Normal Created pod/helloworld-656898b9bb-sg6fs Created container hello-world
0s Normal Started pod/helloworld-656898b9bb-sg6fs Started container hello-world
0s Warning BackOff pod/helloworld-656898b9bb-sg6fs Back-off restarting failed container

The hello-world image is not a long-running process: it just prints some text and exits.
By default, Kubernetes expects the containers in a Pod to be long-running processes, and if one stops it automatically restarts the container.
This behavior is defined by the Pod's restartPolicy field, which can take three values:
Always: always restart stopped containers (the default)
OnFailure: only restart containers that exited with a non-zero exit code
Never: never restart containers
So in your case you should use one of the last two, as the container is expected to stop normally.
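For example, a minimal Pod manifest with restartPolicy: Never could look like the sketch below (the metadata name is just a placeholder, only the image tag matches your events). With this policy the container runs once, prints its message, and the Pod ends up Completed instead of CrashLoopBackOff:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world          # placeholder name, not taken from your deployment
spec:
  restartPolicy: Never       # do not restart the container after it exits
  containers:
  - name: hello-world
    image: hello-world:linux # same tag as in the events above

If you want a controller to manage this kind of run-to-completion workload instead of a bare Pod, a Job is usually a better fit than a Deployment; a Job's pod template only accepts restartPolicy: Never or OnFailure.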

Related

GKE Deploy issue - Free Tier with credit - Workloads

I am trying to deploy on a minimal cluster and it keeps failing.
How can I tweak the configuration to make the availability turn green?
My input:
My application is Spring + Angular (please suggest an easy way to deploy both).
My docker-compose setup created 2 containers. I pushed them to the registry (tagged).
When deploying in Workloads, I added one container after the other and clicked Deploy. The resulting error is above.
Is there a file I need to create, some kind of yml or yaml, etc.? (a minimal sketch of such a manifest follows the events below)
kubectl get pods
NAME                  READY   STATUS             RESTARTS   AGE
nginx-1-d...7-2s6hb   0/2     CrashLoopBackOff   18         25m
nginx-1-6..d7-7645w   0/2     CrashLoopBackOff   18         25m
nginx-1-6...7-9qgjx   0/2     CrashLoopBackOff   18         25m
Events from describe
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/nginx-1-5d...56xp4 to gke-cluster-huge-default-pool-b6..60-4rj5
Normal Pulling 17m kubelet Pulling image "eu.gcr.io/p..my/py...my_appserver#sha256:479bf3e12ee2b410d730...579b940adc8845be74956f5"
Normal Pulled 17m kubelet Successfully pulled image "eu.gcr.io/py..my/py...emy_appserver#sha256:479bf3e12ee2b4..8b99a178ee05e8579b940adc8845be74956f5" in 11.742649177s
Normal Created 15m (x5 over 17m) kubelet Created container p..my-appserver-sha256-1
Normal Started 15m (x5 over 17m) kubelet Started container p..emy-appserver-sha256-1
Normal Pulled 15m (x4 over 17m) kubelet Container image "eu.gcr.io/py...my/pya...my_appserver#sha256:479bf3e12ee2b41..e05e8579b940adc8845be74956f5" already present on machine
Warning BackOff 2m42s (x64 over 17m) kubelet Back-off restarting failed container
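As for the yml/yaml question: the Workloads UI generates a Deployment object for you, and the same thing can be described in a manifest file. A minimal sketch (the name, labels, image and port below are placeholders, not taken from this cluster) looks roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: appserver                              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appserver
  template:
    metadata:
      labels:
        app: appserver
    spec:
      containers:
      - name: appserver                        # placeholder container name
        image: eu.gcr.io/PROJECT/appserver:TAG # replace with the image you pushed
        ports:
        - containerPort: 8080                  # adjust to the port your app listens on

Usually the backend and the frontend each get their own Deployment (plus a Service), rather than both containers being packed into one Pod, and kubectl logs <pod-name> -c <container-name> on one of the CrashLoopBackOff pods will show why the containers keep exiting.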

Configuring the Crunchy Data PostgreSQL operator in a DigitalOcean managed Kubernetes cluster

I am having trouble configuring the Crunchy Data PostgreSQL operator in my DigitalOcean managed Kubernetes cluster. Per their official installation/troubleshooting guide, I changed the default storage classes in the provided manifest file to do-block-storage and I've tried toggling the disable_fsgroup value, all to no avail. I'm getting the following output from running kubectl describe... on the operator pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned pgo/postgres-operator-697fd6dbb6-n764r to test-dev-pool-35jcv
Normal Started 69s kubelet, test-dev-pool-35jcv Started container event
Normal Created 69s kubelet, test-dev-pool-35jcv Created container event
Normal Pulled 69s kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-event:centos7-4.5.0" already present on machine
Normal Started 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Started container scheduler
Normal Created 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Created container scheduler
Normal Pulled 68s (x2 over 69s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-scheduler:centos7-4.5.0" already present on machine
Normal Started 64s (x2 over 69s) kubelet, test-dev-pool-35jcv Started container operator
Normal Created 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Created container operator
Normal Pulled 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/postgres-operator:centos7-4.5.0" already present on machine
Normal Started 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Started container apiserver
Normal Created 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Created container apiserver
Normal Pulled 64s (x2 over 70s) kubelet, test-dev-pool-35jcv Container image "registry.developers.crunchydata.com/crunchydata/pgo-apiserver:centos7-4.5.0" already present on machine
Warning BackOff 63s (x4 over 67s) kubelet, test-dev-pool-35jcv Back-off restarting failed container
Any ideas?
Edit: Solved! I was specifying the default storage incorrectly. The proper edits are
- name: BACKREST_STORAGE
  value: "digitalocean"
- name: BACKUP_STORAGE
  value: "digitalocean"
- name: PRIMARY_STORAGE
  value: "digitalocean"
- name: REPLICA_STORAGE
  value: "digitalocean"
- name: STORAGE5_NAME
  value: "digitalocean"
- name: STORAGE5_ACCESS_MODE
  value: "ReadWriteOnce"
- name: STORAGE5_SIZE
  value: "1Gi"
- name: STORAGE5_TYPE
  value: "dynamic"
- name: STORAGE5_CLASS
  value: "do-block-storage"
See this GitHub issue for how to correctly format the file for DO.

I can't get into the Azure Kubernetes pod

I want to get a shell inside the pod to investigate the problem I'm having with the image, but the errors below appear. As I verified in describe, the container name is correct. What else can I do to connect to the container in the cluster?
Command: kubectl exec -it -c airflow-console -n airflow airflow-console-xxxxxxx-xxxxx bash
error: unable to upgrade connection: container not found ("airflow-console")
Command: kubectl describe pod/airflow-console-xxxxxxx-xxxxx -n airflow
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37m default-scheduler Successfully assigned airflow/airflow-console-xxxxxxx-xxxxx to aks-test
Normal Pulling 37m kubelet, aks-test Pulling image "test.azurecr.io/airflow:2"
Normal Pulled 37m kubelet, aks-test Successfully pulled image "test.azurecr.io/airflow:2"
Warning BackOff 36m kubelet, aks-test Back-off restarting failed container
Normal Pulled 36m (x3 over 37m) kubelet, aks-test Container image "k8s.gcr.io/git-sync:v3.1.2" already present on machine
Normal Created 36m (x3 over 37m) kubelet, aks-test Created container git-sync
Normal Started 36m (x3 over 37m) kubelet, aks-test Started container git-sync
Normal Created 36m (x3 over 36m) kubelet, aks-test Created container airflow-console
Normal Pulled 36m (x2 over 36m) kubelet, aks-test Container image "test.azurecr.io/airflow:2" already present on machine
Normal Started 36m (x3 over 36m) kubelet, aks-test Started container airflow-console
Warning BackOff 2m15s (x178 over 36m) kubelet, aks-test Back-off restarting failed container
This line
Warning BackOff 2m15s (x178 over 36m) kubelet, aks-test Back-off restarting failed container
shows that your pod/container is in a failed state. This prevents you from executing commands in the container, because it is not alive.
To learn why your pod/container is in a bad state, look at the logs of the failed container:
kubectl logs -n airflow airflow-console-xxxxxxx-xxxxx -c airflow-console
or the logs of the previous container instance that failed (sometimes that helps):
kubectl logs -n airflow airflow-console-xxxxxxx-xxxxx -c airflow-console -p
This explains the main reason why a user cannot exec into a container.
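If the logs are empty, the container's last termination state (exit code, reason such as OOMKilled, finish time) often points at the cause as well. One way to pull it out, assuming the same pod and container names as above:

kubectl get pod -n airflow airflow-console-xxxxxxx-xxxxx \
  -o jsonpath='{.status.containerStatuses[?(@.name=="airflow-console")].lastState.terminated}'

Once the container starts successfully and stays Running, kubectl exec will be able to attach to it.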

How to fix ImagePullBackOff with a Kubernetes pod?

I created a pod 5 hours ago. Now I have the error ImagePullBackOff.
These are the events from describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4h51m default-scheduler Successfully assigned default/nodehelloworld.example.com to minikube
Normal Pulling 4h49m (x4 over 4h51m) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Error: ErrImagePull
Normal BackOff 4h49m (x6 over 4h51m) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 4h21m (x132 over 4h51m) kubelet, minikube Error: ImagePullBackOff
Warning FailedMount 5m13s kubelet, minikube MountVolume.SetUp failed for volume "default-token-zpl2j" : couldn't propagate object cache: timed out waiting for the condition
Normal Pulling 3m34s (x4 over 5m9s) kubelet, minikube pulling image "milenkom/docker-demo"
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found
Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Error: ErrImagePull
Normal BackOff 3m5s (x6 over 5m1s) kubelet, minikube Back-off pulling image "milenkom/docker-demo"
Warning Failed 3m5s (x6 over 5m1s) kubelet, minikube Error: ImagePullBackOff
Images on my desktop
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
milenkom/docker-demo tagname 08d27ff00255 6 hours ago 659MB
Following advice from Max and Shanica, I made a mess when tagging:
docker tag 08d27ff00255 docker-demo:latest
Works OK, but when I try
docker push docker-demo:latest
The push refers to repository [docker.io/library/docker-demo]
e892b52719ff: Preparing
915b38bfb374: Preparing
3f1416a1e6b9: Preparing
e1da644611ce: Preparing
d79093d63949: Preparing
87cbe568afdd: Waiting
787c930753b4: Waiting
9f17712cba0b: Waiting
223c0d04a137: Waiting
fe4c16cbf7a4: Waiting
denied: requested access to the resource is denied
although I am logged in
Output of docker inspect 08d27ff00255:
[
    {
        "Id": "sha256:08d27ff0025581727ef548437fce875d670f9e31b373f00c2a2477f8effb9816",
        "RepoTags": [
            "docker-demo:latest",
            "milenkom/docker-demo:tagname"
        ],
Why does assigning the pod fail now?
manifest for milenkom/docker-demo:latest not found
Looks like there's no latest tag in the image you want to pull: https://hub.docker.com/r/milenkom/docker-demo/tags.
Try some existing image.
UPD (based on the question update):
docker push milenkom/docker-demo:tagname
update the k8s pod to point to milenkom/docker-demo:tagname
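Putting the update together, the push and the workload change look roughly like this (the deployment and container names in the last command are placeholders; adjust them, or simply edit the image field in your pod spec, to match how the pod was created):

# the image already carries the milenkom/docker-demo:tagname tag per the inspect output,
# so pushing that tag is enough
docker push milenkom/docker-demo:tagname

# point the workload at the pushed tag (deployment/container names are hypothetical)
kubectl set image deployment/docker-demo docker-demo=milenkom/docker-demo:tagname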

Kubernetes pod deployment error: FailedSync | Error syncing pod

Env:
VBox on a Windows 10 desktop machine
Two Ubuntu VMs; one VM is the master and the other one is the k8s (1.7) worker.
I can see that both nodes are "Ready" when I run get nodes. But even when deploying a very simple nginx pod, I get these messages in the pod describe output:
"Normal | SandboxChanged | Pod sandbox changed, it will be killed and re-created." and "Warning | FailedSync | Error syncing pod".
But if I run the Docker container directly on the worker, the container comes up and runs fine. Does anyone have a suggestion for what I can check?
k8s-master@k8smaster-VirtualBox:~$ kubectl get pods
NAME                            READY   STATUS             RESTARTS   AGE
movie-server-1517284798-lbb01   0/1     CrashLoopBackOff   6          16m

k8s-master@k8smaster-VirtualBox:~$ kubectl describe pod movie-server-1517284798-lbb01
--- clip ---
kubelet, master-virtualbox  spec.containers{movie-server}  Warning  Failed          Error: failed to start container "movie-server": Error response from daemon: {"message":"cannot join network of a non running container: 3f59947dbd404ecf2f6dd0b65dd9dad8b25bf0c418aceb8cf666ad0761402b53"}
kubelet, master-virtualbox  spec.containers{movie-server}  Warning  BackOff         Back-off restarting failed container
kubelet, master-virtualbox                                 Normal   SandboxChanged  Pod sandbox changed, it will be killed and re-created.
kubelet, master-virtualbox  spec.containers{movie-server}  Normal   Pulled          Container image "nancyfeng/movie-server:0.1.0" already present on machine
kubelet, master-virtualbox  spec.containers{movie-server}  Normal   Created         Created container
kubelet, master-virtualbox                                 Warning  FailedSync      Error syncing pod
kubelet, master-virtualbox  spec.containers{movie-server}  Warning  Failed          Error: failed to start container "movie-server": Error response from daemon: {"message":"cannot join network of a non running container: 72ba77b25b6a3969e8921214f0ca73ffaab4c82d8a2852e3d1b1f3ac5dde6ce1"}
--- clip ---