NFS volume sharing issue between wordpress pod and mysql pod - kubernetes

The repository kubernetes-wordpress-with-nfs-volume-on-gke implements a WordPress application that shares an NFS volume between MySQL and WordPress. The idea behind sharing an NFS volume between the pods is to move, as a next step, to a StatefulSet for MySQL. That StatefulSet will need to share the database volume among all the MySQL pods, so that a multi-node database is created that delivers the required high performance.
There is an existing example of this, janakiramm/wp-statefulset, which uses etcd. So why not use NFS instead of etcd?
The commands to create this WordPress application sharing the NFS volume between MySQL and WordPress are:
kubectl create -f 01-pv-gce.yml
kubectl create -f 02-dep-nfs.yml
kubectl create -f 03-srv-nfs.yml
kubectl get services # update 04-pv-pvc.yml with the new IP address of the service
kubectl create -f 04-pv-pvc.yml
kubectl create -f 05-mysql.yml
kubectl create -f 06-wordpress.yml
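The manual IP edit in step 4 can also be scripted. A minimal sketch, assuming the NFS service is named nfs-server and that 04-pv-pvc.yml carries a placeholder for the server address (both are assumptions; the file written below is a stand-in, not the repository's actual manifest):

```shell
# Work in a scratch directory so nothing in the repo is touched.
cd "$(mktemp -d)"

# Stand-in for 04-pv-pvc.yml with a placeholder where the NFS server IP goes.
cat > 04-pv-pvc.yml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: NFS_SERVER_IP
    path: "/"
EOF

# On a live cluster the IP would come from the service, e.g.:
#   NFS_IP=$(kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}')
NFS_IP=10.3.240.20   # hard-coded here so the sketch runs without a cluster

# Substitute the placeholder and show the result.
sed "s/NFS_SERVER_IP/${NFS_IP}/" 04-pv-pvc.yml > tmp.yml && mv tmp.yml 04-pv-pvc.yml
grep "server:" 04-pv-pvc.yml
```

This avoids the error-prone step of hand-editing the manifest every time the service IP changes.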
This implementation did not succeed, however: the WordPress pod does not start:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-2899972627-jgjx0 1/1 Running 0 4m
wp01-mysql-1941769936-m9jjd 1/1 Running 0 3m
wp01-wordpress-2362719074-bv53t 0/1 CrashLoopBackOff 4 2m
There seems to be a problem accessing the NFS volume, as shown below:
$ kubectl describe pods wp01-wordpress-2362719074-bv53t
Name: wp01-wordpress-2362719074-bv53t
Namespace: default
Node: gke-mappedinn-cluster-default-pool-6264f94a-z0sh/10.240.0.4
Start Time: Thu, 04 May 2017 05:59:12 +0400
Labels: app=wp01
pod-template-hash=2362719074
tier=frontend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wp01-wordpress-2362719074","uid":"44b91da0-306d-11e7-a0d1-42010a...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container wordpress
Status: Running
IP: 10.244.0.4
Controllers: ReplicaSet/wp01-wordpress-2362719074
Containers:
wordpress:
Container ID: docker://658c7392c1b7a5033fe1a1b456a9653161003ee2878a4f02c6a12abb49241d47
Image: wordpress:4.6.1-apache
Image ID: docker://sha256:ee397259d4e59c65e2c1c5979a3634eb3ab106bba389acea8b21862053359134
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 04 May 2017 06:03:16 +0400
Finished: Thu, 04 May 2017 06:03:16 +0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Environment:
WORDPRESS_DB_HOST: wp01-mysql
WORDPRESS_DB_PASSWORD: <set to the key 'password' in secret 'wp01-pwd-wordpress'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k650h (ro)
/var/www/html from wordpress-persistent-storage (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
wordpress-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wp01-pvc-data
ReadOnly: false
default-token-k650h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k650h
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned wp01-wordpress-2362719074-bv53t to gke-mappedinn-cluster-default-pool-6264f94a-z0sh
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulling pulling image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Successfully pulled image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 8647e997d6f4; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 8647e997d6f4
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 37f4f0fd392d; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 37f4f0fd392d
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 10s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id b78a661388a2; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id b78a661388a2
3m 3m 2 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 2b6384407678; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 2b6384407678
3m 2m 4 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 40s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 930a3410b213; Security:[seccomp=unconfined]
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 930a3410b213
2m 1m 7 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
4m 1m 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Container image "wordpress:4.6.1-apache" already present on machine
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 658c7392c1b7; Security:[seccomp=unconfined]
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 658c7392c1b7
4m 10s 19 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Warning BackOff Back-off restarting failed docker container
1m 10s 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh
Could you please help with this issue?

Related

Tiller pod crashes after Vagrant VM is powered off

I have set up a Vagrant VM, and installed Kubernetes and Helm.
vagrant@vagrant:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-05-21T18:53:18Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
vagrant@vagrant:~$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
After the first vagrant up that creates the VM, Tiller has no issues.
I power off the VM with vagrant halt and reactivate it with vagrant up. Then Tiller starts to misbehave.
It restarts many times and at some point enters a CrashLoopBackOff state.
etcd-vagrant 1/1 Running 2 1h
heapster-5449cf95bd-h9xk8 1/1 Running 2 1h
kube-apiserver-vagrant 1/1 Running 2 1h
kube-controller-manager-vagrant 1/1 Running 2 1h
kube-dns-6f4fd4bdf-xclbb 3/3 Running 6 1h
kube-proxy-8n8tc 1/1 Running 2 1h
kube-scheduler-vagrant 1/1 Running 2 1h
kubernetes-dashboard-5bd6f767c7-lrdjp 1/1 Running 3 1h
tiller-deploy-78f96d6f9-cswbm 0/1 CrashLoopBackOff 8 38m
weave-net-948jt 2/2 Running 5 1h
I take a look at the pod's events and see that the liveness and readiness probes are failing.
vagrant@vagrant:~$ kubectl describe pod tiller-deploy-78f96d6f9-cswbm -n kube-system
Name: tiller-deploy-78f96d6f9-cswbm
Namespace: kube-system
Node: vagrant/10.0.2.15
Start Time: Wed, 23 May 2018 08:51:54 +0000
Labels: app=helm
name=tiller
pod-template-hash=349528295
Annotations: <none>
Status: Running
IP: 10.32.0.28
Controlled By: ReplicaSet/tiller-deploy-78f96d6f9
Containers:
tiller:
Container ID: docker://389470b95c46f0a5ba6b4b5457f212b0e6f3e3a754beb1aeae835260de3790a7
Image: gcr.io/kubernetes-helm/tiller:v2.9.1
Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:417aae19a0709075df9cc87e2fcac599b39d8f73ac95e668d9627fec9d341af2
Ports: 44134/TCP, 44135/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 23 May 2018 09:26:53 +0000
Finished: Wed, 23 May 2018 09:27:12 +0000
Ready: False
Restart Count: 8
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fl44z (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-fl44z:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fl44z
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 38m kubelet, vagrant MountVolume.SetUp succeeded for volume "default-token-fl44z"
Normal Scheduled 38m default-scheduler Successfully assigned tiller-deploy-78f96d6f9-cswbm to vagrant
Normal Pulled 29m (x2 over 38m) kubelet, vagrant Container image "gcr.io/kubernetes-helm/tiller:v2.9.1" already present on machine
Normal Killing 29m kubelet, vagrant Killing container with id docker://tiller:Container failed liveness probe.. Container will be killed and recreated.
Normal Created 29m (x2 over 38m) kubelet, vagrant Created container
Normal Started 29m (x2 over 38m) kubelet, vagrant Started container
Warning Unhealthy 28m (x2 over 37m) kubelet, vagrant Readiness probe failed: Get http://10.32.0.19:44135/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 17m (x30 over 37m) kubelet, vagrant Liveness probe failed: Get http://10.32.0.19:44135/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Normal SuccessfulMountVolume 11m kubelet, vagrant MountVolume.SetUp succeeded for volume "default-token-fl44z"
Warning FailedCreatePodSandBox 10m (x7 over 11m) kubelet, vagrant Failed create pod sandbox.
Normal SandboxChanged 10m (x8 over 11m) kubelet, vagrant Pod sandbox changed, it will be killed and re-created.
Normal Pulled 10m kubelet, vagrant Container image "gcr.io/kubernetes-helm/tiller:v2.9.1" already present on machine
Normal Created 10m kubelet, vagrant Created container
Normal Started 10m kubelet, vagrant Started container
Warning Unhealthy 10m kubelet, vagrant Liveness probe failed: Get http://10.32.0.28:44135/liveness: dial tcp 10.32.0.28:44135: getsockopt: connection refused
Warning Unhealthy 10m kubelet, vagrant Readiness probe failed: Get http://10.32.0.28:44135/readiness: dial tcp 10.32.0.28:44135: getsockopt: connection refused
Warning Unhealthy 8m (x2 over 9m) kubelet, vagrant Liveness probe failed: Get http://10.32.0.28:44135/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 8m (x2 over 9m) kubelet, vagrant Readiness probe failed: Get http://10.32.0.28:44135/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning BackOff 1m (x22 over 7m) kubelet, vagrant Back-off restarting failed container
After entering this state, it stays there.
Only after I delete the Tiller pod does it come up again, and then everything runs smoothly.
vagrant@vagrant:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
etcd-vagrant 1/1 Running 2 1h
heapster-5449cf95bd-h9xk8 1/1 Running 2 1h
kube-apiserver-vagrant 1/1 Running 2 1h
kube-controller-manager-vagrant 1/1 Running 2 1h
kube-dns-6f4fd4bdf-xclbb 3/3 Running 6 1h
kube-proxy-8n8tc 1/1 Running 2 1h
kube-scheduler-vagrant 1/1 Running 2 1h
kubernetes-dashboard-5bd6f767c7-lrdjp 1/1 Running 4 1h
tiller-deploy-78f96d6f9-tgx4z 1/1 Running 0 7m
weave-net-948jt 2/2 Running 5 1h
However, the events still show the same Unhealthy warnings.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned tiller-deploy-78f96d6f9-tgx4z to vagrant
Normal SuccessfulMountVolume 8m kubelet, vagrant MountVolume.SetUp succeeded for volume "default-token-fl44z"
Normal Pulled 7m kubelet, vagrant Container image "gcr.io/kubernetes-helm/tiller:v2.9.1" already present on machine
Normal Created 7m kubelet, vagrant Created container
Normal Started 7m kubelet, vagrant Started container
Warning Unhealthy 7m kubelet, vagrant Readiness probe failed: Get http://10.32.0.28:44135/readiness: dial tcp 10.32.0.28:44135: getsockopt: connection refused
Warning Unhealthy 7m kubelet, vagrant Liveness probe failed: Get http://10.32.0.28:44135/liveness: dial tcp 10.32.0.28:44135: getsockopt: connection refused
Warning Unhealthy 1m (x6 over 3m) kubelet, vagrant Liveness probe failed: Get http://10.32.0.28:44135/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 41s (x14 over 7m) kubelet, vagrant Readiness probe failed: Get http://10.32.0.28:44135/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Any insight would be appreciated.
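One detail worth noting in the describe output above: the probe settings are very tight (delay=1s timeout=1s), so after a cold VM boot Tiller may simply need longer than one second to answer. A hedged sketch of relaxed probe settings one could patch into the tiller-deploy deployment (the numbers are illustrative assumptions, not the chart's defaults):

```yaml
livenessProbe:
  httpGet:
    path: /liveness
    port: 44135
  initialDelaySeconds: 30   # give Tiller time to come up after a reboot
  timeoutSeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readiness
    port: 44135
  initialDelaySeconds: 30
  timeoutSeconds: 5
  periodSeconds: 10
```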

Kubernetes reports ImagePullBackOff for pod on minikube

I've built a Docker image inside the minikube VM. However, I don't understand why Kubernetes is not finding it.
minikube ssh
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
diyapploopback latest 9590c4dc2ed1 2 hours ago 842MB
And if I describe the pod:
kubectl describe pods abcxyz12-6b4d85894-fhb2p
Name: abcxyz12-6b4d85894-fhb2p
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 13:49:51 +0000
Labels: appId=abcxyz12
pod-template-hash=260841450
Annotations: <none>
Status: Pending
IP: 172.17.0.6
Controllers: <none>
Containers:
nginx:
Container ID:
Image: diyapploopback:latest
Image ID:
Port: 80/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
mariadb:
Container ID: docker://fe09e08f98a9f972f2d086b56b55982e96772a2714ad3b4c2adf4f2f06c2986a
Image: mariadb:10.3
Image ID: docker-pullable://mariadb@sha256:8d4b8fd12c86f343b19e29d0fdd0c63a7aa81d4c2335317085ac973a4782c1f5
Port:
State: Running
Started: Wed, 07 Mar 2018 14:21:00 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 07 Mar 2018 13:49:54 +0000
Finished: Wed, 07 Mar 2018 14:18:43 +0000
Ready: True
Restart Count: 1
Environment:
MYSQL_ROOT_PASSWORD: passwordTempXyz
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: abcxyz12
ReadOnly: false
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
31m 31m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz12-6b4d85894-fhb2p to minikube
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
31m 31m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
31m 31m 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
31m 30m 3 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
31m 29m 4 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
31m 16m 63 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
31m 6m 105 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
21s 21s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "pvc-689f3067-220e-11e8-a244-0800279a9a04"
20s 20s 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
20s 20s 1 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Failed to pull image "diyapploopback:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for diyapploopback, repository does not exist or may require 'docker login'
17s 17s 1 kubelet, minikube spec.containers{nginx} Warning Failed Error: ErrImagePull
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Pulled Container image "mariadb:10.3" already present on machine
17s 17s 1 kubelet, minikube spec.containers{mariadb} Normal Created Created container
16s 16s 1 kubelet, minikube spec.containers{mariadb} Normal Started Started container
16s 15s 2 kubelet, minikube spec.containers{nginx} Normal BackOff Back-off pulling image "diyapploopback:latest"
16s 15s 2 kubelet, minikube spec.containers{nginx} Warning Failed Error: ImagePullBackOff
19s 1s 2 kubelet, minikube spec.containers{nginx} Normal Pulling pulling image "diyapploopback:latest"
It seems I'm able to run it directly (only for debugging/diagnostic purposes):
kubectl run abcxyz123 --image=diyapploopback --image-pull-policy=Never
If I describe the above deployment/container I get:
Name: abcxyz123-6749977548-stvsm
Namespace: diyclientapps
Node: minikube/192.168.99.100
Start Time: Wed, 07 Mar 2018 14:26:33 +0000
Labels: pod-template-hash=2305533104
run=abcxyz123
Annotations: <none>
Status: Running
IP: 172.17.0.9
Controllers: <none>
Containers:
abcxyz123:
Container ID: docker://c9b71667feba21ef259a395c9b8504e3e4968e5b9b35a191963f0576d0631d11
Image: diyapploopback
Image ID: docker://sha256:9590c4dc2ed16cb70a21c3385b7e0519ad0b1fece79e343a19337131600aa866
Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 07 Mar 2018 14:42:45 +0000
Finished: Wed, 07 Mar 2018 14:42:48 +0000
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c62fx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-c62fx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c62fx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 default-scheduler Normal Scheduled Successfully assigned abcxyz123-6749977548-stvsm to minikube
17m 17m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-c62fx"
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Pulled Container image "diyapploopback" already present on machine
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Created Created container
17m 15m 5 kubelet, minikube spec.containers{abcxyz123} Normal Started Started container
16m 1m 66 kubelet, minikube spec.containers{abcxyz123} Warning BackOff Back-off restarting failed container
imagePullPolicy: IfNotPresent
The line above was missing from the container spec in my deployment, and it is required: with a :latest tag the default pull policy is Always, so Kubernetes tries the registry instead of using the locally built image.
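For reference, the policy sits on the container entry of the pod template. A minimal sketch (the container and image names are taken from the question; the surrounding fields are assumptions):

```yaml
# Hypothetical fragment of the deployment's pod template;
# only the imagePullPolicy line is the actual fix.
spec:
  containers:
  - name: nginx
    image: diyapploopback:latest
    imagePullPolicy: IfNotPresent   # or Never, to skip the registry entirely
    ports:
    - containerPort: 80
```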

How to add kubernetes liveness probe

I am writing a simple YAML file to apply a liveness probe using a TCP port, on CentOS 6:
I pulled a centos:6 image from a public repository,
started a container from the image,
installed MySQL and started it to verify that port 3306 is open,
committed the result to my local repository as "mysql-test:v0.1",
and applied a pod as below:
apiVersion: v1
kind: Pod
metadata:
labels:
test: mysql-test
name: mysql-test-exec
spec:
containers:
- name: mysql-test
args:
- /sbin/service
- mysqld
- start
image: mysql-test:v0.1
livenessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 20
But the status of the pod is CrashLoopBackOff, and the status of the container on work02 is Exited.
1) master node
root@kubeadm-master01:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-test-exec 0/1 CrashLoopBackOff 6 8m
root@kubeadm-master01:~# kubectl describe pod mysql-test-exec
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 default-scheduler Normal Scheduled Successfully assigned mysql-test-exec to kubeadm-work02
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 43s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 29s 4 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Pulled Container image "centos-mysql:v0.1" already present on machine
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 10s 9 kubelet, kubeadm-work02 spec.containers{mysql-test} Warning BackOff Back-off restarting failed container
27s 10s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 40s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
2) work node
root@kubeadm-work02:~# docker logs f64e20bf33a8
Starting mysqld: [ OK ]
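The [ OK ] followed by an immediate exit is the clue: a container runs only as long as its PID 1, and /sbin/service mysqld start forks mysqld into the background and then returns, so the kubelet sees a terminated container. A local simulation of that wrapper behaviour (plain sh, no Docker needed; the sleep stands in for the daemonized mysqld):

```shell
# PID-1 stand-in: launches a background "daemon" and then exits,
# exactly like an init-script wrapper does.
sh -c 'sleep 5 >/dev/null 2>&1 & echo "Starting mysqld: [ OK ]"'
echo "wrapper exit code: $?"
```

The wrapper exits cleanly with code 0 while its child keeps running, which is precisely the state the kubelet interprets as a dead container and restarts.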
I had to remove the args to make it work with the Docker image. The pod spec below works for me:
apiVersion: v1
kind: Pod
metadata:
labels:
test: mysql-test
name: mysql-test-exec
spec:
containers:
- name: mysql-test
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
value: mysql456
livenessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 20
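What the tcpSocket probe checks can be reproduced locally: the kubelet simply opens a TCP connection to the port and treats a successful connect as healthy. A sketch using a throwaway listener in place of mysqld (python3 assumed available; no cluster needed):

```shell
# Throwaway TCP listener on 3306, standing in for mysqld
# (SO_REUSEADDR so repeated runs can rebind the port).
python3 -c "import socket,time; s=socket.socket(); s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1); s.bind(('127.0.0.1', 3306)); s.listen(1); time.sleep(10)" &
SRV=$!
sleep 1

# The probe itself: open a TCP connection, report healthy on success.
python3 - <<'EOF'
import socket
try:
    socket.create_connection(("127.0.0.1", 3306), timeout=1).close()
    print("probe: healthy")
except OSError:
    print("probe: failing")
EOF

kill $SRV 2>/dev/null
```

This also explains the original failure mode: the probe was fine, but the container had already exited before the 15-second initial delay elapsed.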

deis builder keeps restarting with liveness probe failure

I tried deleting the pods, rescaling the replicas, and deleting the AWS instances, but I still cannot make the deis builder work normally. It keeps restarting with a failed liveness probe. Below are the logs from the deis builder:
$ kubectl describe pods/deis-builder-2995120344-mz2zg -n deis
Name: deis-builder-2995120344-mz2zg
Namespace: deis
Node: ip-10-0-48-189.ec2.internal/10.0.48.189
Start Time: Wed, 15 Mar 2017 22:29:03 -0400
Labels: app=deis-builder
pod-template-hash=2995120344
Status: Running
IP: 10.34.184.7
Controllers: ReplicaSet/deis-builder-2995120344
Containers:
deis-builder:
Container ID: docker://f2b7799712c347759832270716057b6ac3be68298eef3057c25727b66024c84a
Image: quay.io/deis/builder:v2.7.1
Image ID: docker-pullable://quay.io/deis/builder@sha256:3dab1dd4e6359d1588fee1b4f93ef9f5c70f268f17de5bed4bc13faa210ce5d0
Ports: 2223/TCP, 8092/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 15 Mar 2017 22:37:37 -0400
Finished: Wed, 15 Mar 2017 22:38:15 -0400
Ready: False
Restart Count: 7
Liveness: http-get http://:8092/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8092/readiness delay=30s timeout=1s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/api/auth from builder-key-auth (ro)
/var/run/secrets/deis/builder/ssh from builder-ssh-private-keys (ro)
/var/run/secrets/deis/objectstore/creds from objectstore-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from deis-builder-token-qbqff (ro)
Environment Variables:
DEIS_REGISTRY_SERVICE_HOST: 127.0.0.1
DEIS_REGISTRY_SERVICE_PORT: 5555
HEALTH_SERVER_PORT: 8092
EXTERNAL_PORT: 2223
BUILDER_STORAGE: s3
DEIS_REGISTRY_LOCATION: ecr
DEIS_REGISTRY_SECRET_PREFIX: private-registry
GIT_LOCK_TIMEOUT: 10
SLUGBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'slugbuilder-config'>
SLUG_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'slugbuilder-config'>
DOCKERBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'dockerbuilder-config'>
DOCKER_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'dockerbuilder-config'>
DOCKERIMAGE: 1
DEIS_DEBUG: false
POD_NAMESPACE: deis (v1:metadata.namespace)
DEIS_BUILDER_KEY: <set to the key 'builder-key' in secret 'builder-key-auth'>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
builder-key-auth:
Type: Secret (a volume populated by a Secret)
SecretName: builder-key-auth
builder-ssh-private-keys:
Type: Secret (a volume populated by a Secret)
SecretName: builder-ssh-private-keys
objectstore-creds:
Type: Secret (a volume populated by a Secret)
SecretName: objectstorage-keyfile
deis-builder-token-qbqff:
Type: Secret (a volume populated by a Secret)
SecretName: deis-builder-token-qbqff
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- -------------
10m 10m 1 {default-scheduler } Normal Scheduled Successfully assigned deis-builder-2995120344-mz2zg to ip-10-0-48-189.ec2.internal
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 7eac3a357f61
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 7eac3a357f61; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 8e730f2731ef; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 8e730f2731ef
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 7eac3a357f61: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 5f4e695c595a; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 8e730f2731ef: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 5f4e695c595a
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id c87d762fc118; Security:[seccomp=unconfined]
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id c87d762fc118
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 5f4e695c595a: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 416573d43fe4; Security:[seccomp=unconfined]
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 416573d43fe4
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id c87d762fc118: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 416573d43fe4: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 6m 4 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id bf5b29729c27; Security:[seccomp=unconfined]
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id bf5b29729c27
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Readiness probe failed: Get http://10.34.184.7:8092/readiness: dial tcp 10.34.184.7:8092: getsockopt: connection refused
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: dial tcp 10.34.184.7:8092: getsockopt: connection refused
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id e457328db858
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id e457328db858; Security:[seccomp=unconfined]
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id bf5b29729c27: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id e457328db858: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 2m 13 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id f2b7799712c3
10m 2m 8 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Pulled Container image "quay.io/deis/builder:v2.7.1" already present on machine
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id f2b7799712c3; Security:[seccomp=unconfined]
10m 1m 6 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1m 1m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id f2b7799712c3: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 9s 26 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning BackOff Back-off restarting failed docker container
1m 9s 9 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
What does helm ls show for the Workflow version of Deis?
Does anything show up in the container logs when you run the command below?
kubectl --namespace deis logs deis-builder-2995120344-mz2zg
Those logs will help anyone trying to work out why your builder is unhealthy.
My solution was to delete Deis and redeploy it.

After creating a Deployment from a YAML manifest with the image fernandoacorreia/ubuntu-14.04-oracle-java-1.7, the pod is not running
Kubernetes version (use kubectl version):
1.5.2
Environment:
- Kubernetes 1.5.2, Docker 1.13.1, Ubuntu 16.04
- The image has already been pulled:
docker pull fernandoacorreia/ubuntu-14.04-oracle-java-1.7
What happened:
step1:
vi ubuntu_ora.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        ports:
        - containerPort: 80
step2:
kubectl create -f ubuntu_ora.yaml
step3:
openinstall#k8master:~$ kubectl get pods --all-namespaces|grep ubuntu-ora
NAMESPACE NAME READY STATUS RESTARTS AGE
default ubuntu-ora-4001744982-pvjwv 0/1 CrashLoopBackOff 1 10s
step4:
kubectl describe pod ubuntu-ora-4001744982-pvjwv
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
45s 45s 1 {default-scheduler } Normal Scheduled Successfully assigned ubuntu-ora-4001744982-pvjwv to k8node3
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 0c14684efe67; Security:[seccomp=unconfined]
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 0c14684efe67
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id d0df08a7f2c9; Security:[seccomp=unconfined]
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id d0df08a7f2c9
35s 34s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 10s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
44s 18s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulling pulling image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
38s 17s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulled Successfully pulled image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 89faf423f478; Security:[seccomp=unconfined]
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 89faf423f478
35s 3s 4 {kubelet k8node3} spec.containers{ununtu} Warning BackOff Back-off restarting failed docker container
16s 3s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 20s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
step5:
docker ps -a|grep ubun
b708ebc0ebcb fernandoacorreia/ubuntu-14.04-oracle-java-1.7 "java" About a minute ago Exited (1) About a minute ago k8s_ununtu.94603cd2_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_668ca1c5
ec287bb33333 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 minutes ago Up 4 minutes k8s_POD.d8dbe16c_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_afa16f4c
See also:
https://github.com/kubernetes/kubernetes/issues/42377
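The docker ps -a output in step 5 already points at the cause: the container ran the image's default command "java" and exited with status 1, so with no long-running foreground process the kubelet keeps restarting it and the pod ends up in CrashLoopBackOff. A minimal sketch of a workaround, assuming you only need the container to stay up (e.g. to exec into it), is to override the command in the Deployment; the tail -f /dev/null placeholder is my assumption, not something from the image:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        # Override the image's default "java" command, which exits
        # immediately; keep a foreground process running instead.
        command: ["/bin/sh", "-c", "tail -f /dev/null"]
        ports:
        - containerPort: 80
```

For a real workload you would instead point command/args at the actual Java application you want to run in the foreground.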