I am writing a simple YAML file to apply a liveness probe using a TCP port on CentOS 6.
I pulled a centos:6 image from the public repository,
started a container using the image,
installed MySQL and started it to verify an open port (3306),
committed it to the local repository as "mysql-test:v0.1",
and applied a Pod as below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: mysql-test
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    args:
    - /sbin/service
    - mysqld
    - start
    image: mysql-test:v0.1
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20
But the status of the pod is CrashLoopBackOff, and the status of the container on work02 is Exited.
1) master node
root@kubeadm-master01:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-test-exec 0/1 CrashLoopBackOff 6 8m
root@kubeadm-master01:~# kubectl describe pod mysql-test-exec
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 default-scheduler Normal Scheduled Successfully assigned mysql-test-exec to kubeadm-work02
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 43s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 29s 4 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Pulled Container image "centos-mysql:v0.1" already present on machine
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 10s 9 kubelet, kubeadm-work02 spec.containers{mysql-test} Warning BackOff Back-off restarting failed container
27s 10s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 40s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
2) work node
root@kubeadm-work02:~# docker logs f64e20bf33a8
Starting mysqld: [ OK ]
I had to remove the args to make it work with the Docker image; the manifest below works for me:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: mysql-test
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    image: mysql:5.6
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysql456
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20
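For context on why the original spec failed: /sbin/service mysqld start exits as soon as the init script has forked the daemon (the docker logs above show only "Starting mysqld: [ OK ]"), so the container's main process terminates and the kubelet restarts it, which is what produces CrashLoopBackOff. If you wanted to keep the self-built CentOS 6 image instead of switching to mysql:5.6, a minimal sketch would give the container a foreground process; this assumes mysqld_safe is installed in mysql-test:v0.1, which is not shown in the question:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: mysql-test
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    image: mysql-test:v0.1
    # assumption: mysqld_safe exists in the image; it stays in the foreground
    command: ["/usr/bin/mysqld_safe"]
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20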
Related
So I have this unhealthy cluster partially working in the datacenter. This is probably the 10th time I have rebuilt it from the instructions at: https://kubernetes.io/docs/setup/independent/high-availability/
I can apply some pods to this cluster and it seems to work, but eventually it starts slowing down and crashing, as you can see below. Here is the scheduler manifest:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.14.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-42psn 1/1 Running 9 88m
coredns-fb8b8dccf-x9mlt 1/1 Running 11 88m
docker-registry-dqvzb 1/1 Running 1 2d6h
kube-apiserver-kube-apiserver-1 1/1 Running 44 2d8h
kube-apiserver-kube-apiserver-2 1/1 Running 34 2d7h
kube-controller-manager-kube-apiserver-1 1/1 Running 198 2d2h
kube-controller-manager-kube-apiserver-2 0/1 CrashLoopBackOff 170 2d7h
kube-flannel-ds-amd64-4mbfk 1/1 Running 1 2d7h
kube-flannel-ds-amd64-55hc7 1/1 Running 1 2d8h
kube-flannel-ds-amd64-fvwmf 1/1 Running 1 2d7h
kube-flannel-ds-amd64-ht5wm 1/1 Running 3 2d7h
kube-flannel-ds-amd64-rjt9l 1/1 Running 4 2d8h
kube-flannel-ds-amd64-wpmkj 1/1 Running 1 2d7h
kube-proxy-2n64d 1/1 Running 3 2d7h
kube-proxy-2pq2g 1/1 Running 1 2d7h
kube-proxy-5fbms 1/1 Running 2 2d8h
kube-proxy-g8gmn 1/1 Running 1 2d7h
kube-proxy-wrdrj 1/1 Running 1 2d8h
kube-proxy-wz6gv 1/1 Running 1 2d7h
kube-scheduler-kube-apiserver-1 0/1 CrashLoopBackOff 198 2d2h
kube-scheduler-kube-apiserver-2 1/1 Running 5 18m
nginx-ingress-controller-dz8fm 1/1 Running 3 2d4h
nginx-ingress-controller-sdsgg 1/1 Running 3 2d4h
nginx-ingress-controller-sfrgb 1/1 Running 1 2d4h
$ kubectl -n kube-system describe pod kube-scheduler-kube-apiserver-1
Containers:
kube-scheduler:
Container ID: docker://c04f3c9061cafef8749b2018cd66e6865d102f67c4d13bdd250d0b4656d5f220
Image: k8s.gcr.io/kube-scheduler:v1.14.2
Image ID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:052e0322b8a2b22819ab0385089f202555c4099493d1bd33205a34753494d2c2
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 28 May 2019 23:16:50 -0400
Finished: Tue, 28 May 2019 23:19:56 -0400
Ready: False
Restart Count: 195
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Created container kube-scheduler
Normal Started 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Started container kube-scheduler
Warning Unhealthy 137m (x71 over 34h) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
Normal Pulled 132m (x129 over 37h) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine
Warning BackOff 128m (x1129 over 34h) kubelet, kube-apiserver-1 Back-off restarting failed container
Normal SandboxChanged 80m kubelet, kube-apiserver-1 Pod sandbox changed, it will be killed and re-created.
Warning Failed 76m kubelet, kube-apiserver-1 Error: context deadline exceeded
Normal Pulled 36m (x7 over 78m) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine
Normal Started 36m (x6 over 74m) kubelet, kube-apiserver-1 Started container kube-scheduler
Normal Created 32m (x7 over 74m) kubelet, kube-apiserver-1 Created container kube-scheduler
Warning Unhealthy 20m (x9 over 40m) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
Warning BackOff 2m56s (x85 over 69m) kubelet, kube-apiserver-1 Back-off restarting failed container
I feel like I am overlooking a simple option or configuration, but I can't find it, and after days of dealing with this problem and reading documentation I am at my wits' end.
The load balancer is a TCP load balancer and seems to be working as expected as I can query the cluster from my desktop.
Any suggestions or troubleshooting tips are definitely welcome at this time.
Thank you.
The problem with our configuration was that a well-intentioned technician decided to eliminate one of the rules on the Kubernetes master's firewall, which prevented the master from looping back to ports it needed to probe. This caused all kinds of weird issues and misdiagnosed problems, which was definitely the wrong direction. After we allowed all ports on the servers, Kubernetes was back to its normal behavior.
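For anyone diagnosing the same symptom, a quick loopback check on the master node can confirm whether the probe ports are reachable before digging into manifests. This is only a diagnostic sketch (port 10251 comes from the scheduler manifest above), not part of the original fix:

# Run on the master node whose scheduler keeps failing its liveness probe.
curl -v http://127.0.0.1:10251/healthz   # the exact URL the kubelet probes, per the manifest
ss -lntp | grep 10251                    # is anything actually listening on the port?
sudo iptables -L -n | grep 10251         # look for firewall rules that mention the port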
I've built a Docker image within the minikube VM. However, I don't understand why Kubernetes is not finding it.
minikube ssh
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
diyapploopback latest 9590c4dc2ed1 2 hours ago 842MB
And if I describe the pod:
kubectl describe pods abcxyz12-6b4d85894-fhb2p
Name: abcxyz12-6b4d85894-fhb2p
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 13:49:51 +0000
Labels: appId=abcxyz12
pod-template-hash=260841450
Annotations: <none>
Status: Pending
IP: 172.17.0.6
Controllers: <none>
Containers:
nginx:
Container ID:
Image: diyapploopback:latest
Image ID:
Port: 80/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
mariadb:
Container ID: docker://fe09e08f98a9f972f2d086b56b55982e96772a2714ad3b4c2adf4f2f06c2986a
Image: mariadb:10.3
Image ID: docker-pullable://mariadb@sha256:8d4b8fd12c86f343b19e29d0fdd0c63a7aa81d4c2335317085ac973a4782c1f5
Port:
State: Running
Started: Wed, 07 Mar 2018 14:21:00 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 07 Mar 2018 13:49:54 +0000
Finished: Wed, 07 Mar 2018 14:18:43 +0000
Ready: True
Restart Count: 1
Environment:
MYSQL_ROOT_PASSWORD: passwordTempXyz
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: abcxyz12
ReadOnly: false
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz12-6b4d85894-fhb2p to minikube
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
31m 29m 4 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
31m 16m 63 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
31m 6m 105 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
21s 21s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
20s 20s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
20s 20s 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
16s 16s 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
16s 15s 2 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
16s 15s 2 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
19s 1s 2 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
It seems I'm able to run it directly (only for debugging/diagnosis purposes):
kubectl run abcxyz123 --image=diyapploopback --image-pull-policy=Never
If I describe the above deployment/container I get:
Name: abcxyz123-6749977548-stvsm
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 14:26:33 +0000
Labels: pod-template-hash=2305533104
run=abcxyz123
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controllers: <none>
Containers:
abcxyz123:
Container ID: docker://c9b71667feba21ef259a395c9b8504e3e4968e5b9b35a191963f0576d0631d11
Image: diyapploopback
Image ID: docker://sha256:9590c4dc2ed16cb70a21c3385b7e0519ad0b1fece79e343a19337131600aa866
Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 07 Mar 2018 14:42:45 +0000
Finished: Wed, 07 Mar 2018 14:42:48 +0000
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz123-6749977548-stvsm to minikube
17m 17m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Pulled Container image "diyapploopback" already present on machine
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Created Created container
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Started Started container
16m 1m 66 kubelet, minikube spec.containers{abcxyz123} Warning BackOff Back-off restarting failed container
imagePullPolicy: IfNotPresent
The above was not present (and it is required) in the container config within my deployment.
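In other words, for an image tagged :latest the default imagePullPolicy is Always, so the kubelet tries to pull from a registry even though the image already exists inside the minikube Docker daemon. A minimal sketch of the relevant part of the Deployment's pod template, reusing the container and image names from the output above (the rest of the spec is assumed):

containers:
- name: nginx                      # container name as shown in the describe output
  image: diyapploopback:latest     # image built inside the minikube VM
  imagePullPolicy: IfNotPresent    # or Never; stops the kubelet from pulling from a registry
  ports:
  - containerPort: 80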
This repository, kubernetes-wordpress-with-nfs-volume-on-gke, tries to implement a WordPress application that shares an NFS volume between MySQL and WordPress. The idea behind sharing an NFS volume between pods is to implement, in the next step, a StatefulSet for MySQL. This StatefulSet application will need to share the database (the volume of the database) between all of the MySQL pods, so that a multi-node database is created that ensures the required high performance.
To do that, there is an example, janakiramm/wp-statefulset. This example uses etcd. So why not use NFS instead of etcd?
The commands to run to create this Kubernetes WordPress application that shares the NFS volume between MySQL and WordPress are:
kubectl create -f 01-pv-gce.yml
kubectl create -f 02-dep-nfs.yml
kubectl create -f 03-srv-nfs.yml
kubectl get services # you have to update the file 04-pv-pvc with the new IP address of the service
kubectl create -f 04-pv-pvc.yml
kubectl create -f 05-mysql.yml
kubectl create -f 06-wordpress.yml
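For reference, step 4 (04-pv-pvc.yml) is where the NFS service IP has to go. A minimal sketch of what such a file might contain, using the claim name from the describe output below and a placeholder IP and size (both are assumptions; take the real values from kubectl get services):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data            # assumed name
spec:
  capacity:
    storage: 5Gi                # assumed size
  accessModes:
  - ReadWriteMany               # NFS can be mounted read-write by several pods
  nfs:
    server: 10.0.0.10           # replace with the ClusterIP of the nfs-server service
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp01-pvc-data           # the ClaimName referenced by the WordPress pod
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi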
This implementation did not succeed; the WordPress pod is not starting:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-2899972627-jgjx0 1/1 Running 0 4m
wp01-mysql-1941769936-m9jjd 1/1 Running 0 3m
wp01-wordpress-2362719074-bv53t 0/1 CrashLoopBackOff 4 2m
It seems there is a problem accessing the NFS volume, as described below:
$ kubectl describe pods wp01-wordpress-2362719074-bv53t
Name: wp01-wordpress-2362719074-bv53t
Namespace: default
Node: gke-mappedinn-cluster-default-pool-6264f94a-z0sh/10.240.0.4
Start Time: Thu, 04 May 2017 05:59:12 +0400
Labels: app=wp01
pod-template-hash=2362719074
tier=frontend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wp01-wordpress-2362719074","uid":"44b91da0-306d-11e7-a0d1-42010a...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container wordpress
Status: Running
IP: 10.244.0.4
Controllers: ReplicaSet/wp01-wordpress-2362719074
Containers:
wordpress:
Container ID: docker://658c7392c1b7a5033fe1a1b456a9653161003ee2878a4f02c6a12abb49241d47
Image: wordpress:4.6.1-apache
Image ID: docker://sha256:ee397259d4e59c65e2c1c5979a3634eb3ab106bba389acea8b21862053359134
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 04 May 2017 06:03:16 +0400
Finished: Thu, 04 May 2017 06:03:16 +0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Environment:
WORDPRESS_DB_HOST: wp01-mysql
WORDPRESS_DB_PASSWORD: <set to the key 'password' in secret 'wp01-pwd-wordpress'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k650h (ro)
/var/www/html from wordpress-persistent-storage (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
wordpress-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wp01-pvc-data
ReadOnly: false
default-token-k650h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k650h
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned wp01-wordpress-2362719074-bv53t to gke-mappedinn-cluster-default-pool-6264f94a-z0sh
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulling pulling image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Successfully pulled image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 8647e997d6f4; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 8647e997d6f4
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 37f4f0fd392d; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 37f4f0fd392d
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 10s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id b78a661388a2; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id b78a661388a2
3m 3m 2 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 2b6384407678; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 2b6384407678
3m 2m 4 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 40s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 930a3410b213; Security:[seccomp=unconfined]
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 930a3410b213
2m 1m 7 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
4m 1m 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Container image "wordpress:4.6.1-apache" already present on machine
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 658c7392c1b7; Security:[seccomp=unconfined]
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 658c7392c1b7
4m 10s 19 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Warning BackOff Back-off restarting failed docker container
1m 10s 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh
Could you please help with this issue?
After creating a YAML with the image fernandoacorreia/ubuntu-14.04-oracle-java-1.7, the pod is not running.
Kubernetes version (use kubectl version):
1.5.2
Environment:
-- ENV: K8S 1.5.2, Docker 1.13.1, OS Ubuntu 16.04
-- already finished pulling the image:
docker pull fernandoacorreia/ubuntu-14.04-oracle-java-1.7
What happened:
step1:
vi ubuntu_ora.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        ports:
        - containerPort: 80
step2:
kubectl create -f ubuntu_ora.yaml
step3:
openinstall@k8master:~$ kubectl get pods --all-namespaces|grep ubuntu-ora
NAMESPACE NAME READY STATUS RESTARTS AGE
default ubuntu-ora-4001744982-pvjwv 0/1 CrashLoopBackOff 1 10s
step4:
kubectl describe pod ubuntu-ora-4001744982-pvjwv
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
45s 45s 1 {default-scheduler } Normal Scheduled Successfully assigned ubuntu-ora-4001744982-pvjwv to k8node3
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 0c14684efe67; Security:[seccomp=unconfined]
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 0c14684efe67
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id d0df08a7f2c9; Security:[seccomp=unconfined]
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id d0df08a7f2c9
35s 34s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 10s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
44s 18s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulling pulling image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
38s 17s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulled Successfully pulled image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 89faf423f478; Security:[seccomp=unconfined]
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 89faf423f478
35s 3s 4 {kubelet k8node3} spec.containers{ununtu} Warning BackOff Back-off restarting failed docker container
16s 3s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 20s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
step5:
docker ps -a|grep ubun
b708ebc0ebcb fernandoacorreia/ubuntu-14.04-oracle-java-1.7 "java" About a minute ago Exited (1) About a minute ago k8s_ununtu.94603cd2_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_668ca1c5
ec287bb33333 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 minutes ago Up 4 minutes k8s_POD.d8dbe16c_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_afa16f4c
See:
https://github.com/kubernetes/kubernetes/issues/42377
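The linked issue describes the same root cause that the docker ps output above shows: the image's default command is java, which prints its usage and exits immediately, so there is no long-running process and the kubelet keeps restarting the container. A minimal sketch of keeping the pod alive (for example, to exec into it); the sleep command is just a placeholder assumption, and your actual workload would go in its place:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        # placeholder long-running process so the container does not exit
        command: ["sleep", "365d"]
        ports:
        - containerPort: 80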
I just installed a local Kubernetes cluster, but when I tried the command
cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80
to create and run pods, here is what I got:
NAME READY STATUS RESTARTS AGE
my-nginx-00t7f 0/1 ContainerCreating 0 23m
my-nginx-spy2b 0/1 ContainerCreating 0 23m
and when I used kubectl logs, I got
Pod "my-nginx-00t7f" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
It seems it got stuck in 'Pending' status.
Then I used 'kubectl describe' and got:
Name: my-nginx-00t7f
Namespace: default
Image(s): nginx
Node: 127.0.0.1/127.0.0.1
Start Time: Thu, 17 Dec 2015 22:27:18 +0800
Labels: run=my-nginx
Status: Pending
Reason:
Message:
IP:
Replication Controllers: my-nginx (2/2 replicas created)
Containers:
my-nginx:
Container ID:
Image: nginx
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-p09p6:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-p09p6
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
26m 26m 1 {scheduler } Normal Scheduled Successfully assigned my-nginx-00t7f to 127.0.0.1
22m 1m 79 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ImagePullBackOff
24m 5s 8 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ErrImagePull
It seems my Docker cannot pull images, but actually it can; there is no problem when I run docker pull nginx.
I assume that you figured out from the kubelet logs that it was the pause container that couldn't be pulled.
Kubernetes needs to create a container for the pod in order to hold shared resources, such as the network namespace. It uses the pause container for this, which is a very small container that just sleeps forever.
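If that is what happened, pre-pulling the pause image on the node usually unblocks pods stuck in ContainerCreating. A sketch, assuming the commonly used image name for clusters of that era; the kubelet's --pod-infra-container-image flag shows the exact name it expects:

# Run on the node where the pod is stuck.
ps aux | grep kubelet | grep -o 'pod-infra-container-image=[^ ]*'   # which pause image is configured?
docker pull gcr.io/google_containers/pause-amd64:3.0                # assumed default; adjust to the flag's value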
If your container remains in Pending status, then please check the kube-scheduler service. If it is in a stopped state, start it and check again.
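On a master that runs the scheduler as a systemd service rather than a static pod, a quick check might look like the following; the unit name is an assumption and varies between installations:

systemctl status kube-scheduler       # is the scheduler running?
sudo systemctl start kube-scheduler   # start it if it is stopped
sudo systemctl enable kube-scheduler  # keep it running after reboots
kubectl get pods                      # pending pods should now get scheduled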