after creating a Deployment from YAML with image fernandoacorreia/ubuntu-14.04-oracle-java-1.7, the pod is not running

Kubernetes version (use kubectl version):
1.5.2
Environment:
--ENV: Kubernetes 1.5.2, Docker 1.13.1, OS Ubuntu 16.04
--the image has already been pulled:
docker pull fernandoacorreia/ubuntu-14.04-oracle-java-1.7
What happened:
step1:
vi ubuntu_ora.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        ports:
        - containerPort: 80
step2:
kubectl create -f ubuntu_ora.yaml
step3:
openinstall@k8master:~$ kubectl get pods --all-namespaces|grep ubuntu-ora
NAMESPACE NAME READY STATUS RESTARTS AGE
default ubuntu-ora-4001744982-pvjwv 0/1 CrashLoopBackOff 1 10s
step4:
kubectl describe pod ubuntu-ora-4001744982-pvjwv
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
45s 45s 1 {default-scheduler } Normal Scheduled Successfully assigned ubuntu-ora-4001744982-pvjwv to k8node3
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 0c14684efe67; Security:[seccomp=unconfined]
38s 38s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 0c14684efe67
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id d0df08a7f2c9; Security:[seccomp=unconfined]
36s 36s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id d0df08a7f2c9
35s 34s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 10s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
44s 18s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulling pulling image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
38s 17s 3 {kubelet k8node3} spec.containers{ununtu} Normal Pulled Successfully pulled image "fernandoacorreia/ubuntu-14.04-oracle-java-1.7"
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Created Created container with docker id 89faf423f478; Security:[seccomp=unconfined]
17s 17s 1 {kubelet k8node3} spec.containers{ununtu} Normal Started Started container with docker id 89faf423f478
35s 3s 4 {kubelet k8node3} spec.containers{ununtu} Warning BackOff Back-off restarting failed docker container
16s 3s 2 {kubelet k8node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ununtu" with CrashLoopBackOff: "Back-off 20s restarting failed container=ununtu pod=ubuntu-ora-4001744982-pvjwv_default(4f739fd7-fe78-11e6-96e3-7824af3fe739)"
step5:
docker ps -a|grep ubun
b708ebc0ebcb fernandoacorreia/ubuntu-14.04-oracle-java-1.7 "java" About a minute ago Exited (1) About a minute ago k8s_ununtu.94603cd2_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_668ca1c5
ec287bb33333 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 minutes ago Up 4 minutes k8s_POD.d8dbe16c_ubuntu-ora-4001744982-pvjwv_default_4f739fd7-fe78-11e6-96e3-7824af3fe739_afa16f4c
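The docker ps output above shows the cause: the container's command is just "java", which exits with code 1 as soon as it runs, so the kubelet keeps restarting it with increasing back-off. A fix (not from the original report) is to override the command with something long-running; a minimal sketch, where the sleep is only a placeholder for the real Java invocation:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-ora
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ununtu
        image: fernandoacorreia/ubuntu-14.04-oracle-java-1.7
        # keep PID 1 alive; replace the sleep with the actual java -jar ... command
        command: ["/bin/sh", "-c", "sleep infinity"]
        ports:
        - containerPort: 80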
See also:
https://github.com/kubernetes/kubernetes/issues/42377

Related

How to add kubernetes liveness probe

I am writing a simple YAML file to apply a liveness probe using a TCP port on CentOS 6.
I pulled a centos:6 image from the public repository,
started a container using the image,
installed mysql and started it to verify an open port (3306),
committed it to the local repository as "mysql-test:v0.1",
and applied a pod as below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: mysql-test
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    args:
    - /sbin/service
    - mysqld
    - start
    image: mysql-test:v0.1
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20
But the status of the pod is CrashLoopBackOff, and the status of the container on work02 is Exited.
1) master node
root@kubeadm-master01:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-test-exec 0/1 CrashLoopBackOff 6 8m
root@kubeadm-master01:~# kubectl describe pod mysql-test-exec
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 default-scheduler Normal Scheduled Successfully assigned mysql-test-exec to kubeadm-work02
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id abbad6585021151cd86fdfb3a9f733245f603686c90f533f2344397c97c36918
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id a1062083089eed109fe8f41344136631bb9d4c08a2c6454dc77f677f01a48666
1m 1m 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 10s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 1m 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 79512aeaf8a6b4692e11b344adb24763343bb2a06c9003222097962822d42202
1m 43s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 20s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Started Started container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 29s 4 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Pulled Container image "centos-mysql:v0.1" already present on machine
29s 29s 1 kubelet, kubeadm-work02 spec.containers{mysql-test} Normal Created Created container with id 4427a3b8e5320b284ac764c1152def4ba749e4f656b3c464a472514bccf2e30e
1m 10s 9 kubelet, kubeadm-work02 spec.containers{mysql-test} Warning BackOff Back-off restarting failed container
27s 10s 3 kubelet, kubeadm-work02 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "mysql-test" with CrashLoopBackOff: "Back-off 40s restarting failed container=mysql-test pod=mysql-test-exec_default(810c37bd-7a8c-11e7-9224-525400603584)"
2) work node
root@kubeadm-work02:~# docker logs f64e20bf33a8
Starting mysqld: [ OK ]
The container exits because /sbin/service mysqld start returns as soon as mysqld has been daemonized, so the container's main process (PID 1) terminates and Kubernetes restarts it in a crash loop; a container needs a foreground process. I had to remove the args and rely on the image's own entrypoint. The pod below works for me:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: mysql-test
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    image: mysql:5.6
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysql456
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20
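If the goal is to keep the custom mysql-test:v0.1 image instead, an alternative sketch is to run the server as the container's foreground process rather than through the init script (treat /usr/bin/mysqld_safe as an assumption; it is the usual CentOS 6 location, so verify it in your image):
apiVersion: v1
kind: Pod
metadata:
  name: mysql-test-exec
spec:
  containers:
  - name: mysql-test
    image: mysql-test:v0.1
    # run mysqld in the foreground so PID 1 stays alive and the TCP probe has something to hit
    command: ["/usr/bin/mysqld_safe"]
    livenessProbe:
      tcpSocket:
        port: 3306
      initialDelaySeconds: 15
      periodSeconds: 20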

"kubectl set image" fails with ErrImagePull

Very often when I want to deploy a new image with "kubectl set image", it fails with ErrImagePull status and then fixes itself after some time (up to a few hours). These are the events from "kubectl describe pod":
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
36m 36m 1 {default-scheduler } Normal Scheduled Successfully assigned zzz-staging-2373868389-62tgk to gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5
36m 12m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal Pulling pulling image "us.gcr.io/yyyy-staging/zzz:latest"
31m 11m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Warning Failed Failed to pull image "us.gcr.io/yyyy-staging/zzz:latest": net/http: request canceled
31m 11m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ErrImagePull: "net/http: request canceled"
16m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal BackOff Back-off pulling image "us.gcr.io/yyyy-staging/zzz:latest"
16m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImagePullBackOff: "Back-off pulling image \"us.gcr.io/yyyy-staging/zzz:latest\""
24m 7m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Warning InspectFailed Failed to inspect image "us.gcr.io/yyyy-staging/zzz:latest": operation timeout: context deadline exceeded
24m 7m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImageInspectError: "Failed to inspect image \"us.gcr.io/yyyy-staging/zzz:latest\": operation timeout: context deadline exceeded"
Is there a way to avoid that?
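No answer is recorded here, but one common mitigation (my suggestion, not from the thread): the pod uses the :latest tag, and for :latest the default imagePullPolicy is Always, so every container start forces a registry round-trip that can hit exactly these pull and inspect timeouts. Pushing a versioned tag and deploying that instead lets nodes reuse a locally cached image. A minimal sketch of the container spec (the tag v42 is illustrative):
spec:
  containers:
  - name: zzz-staging
    image: us.gcr.io/yyyy-staging/zzz:v42   # pinned tag instead of :latest
    imagePullPolicy: IfNotPresent           # reuse the local image when present
With kubectl set image that would be, for example, kubectl set image deployment/zzz-staging zzz-staging=us.gcr.io/yyyy-staging/zzz:v42 (the deployment name is inferred from the pod name and is an assumption).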

NFS volume sharing issue between wordpress pod and mysql pod

This repository, kubernetes-wordpress-with-nfs-volume-on-gke, tries to implement a WordPress application that shares an NFS volume between MySQL and WordPress. The idea behind sharing an NFS volume between pods is, as a next step, to implement a StatefulSet for MySQL. That StatefulSet will need to share the database (the volume of the database) between all of the MySQL pods, so that a multi-node database is created that delivers the requested high performance.
To do that, there is the example janakiramm/wp-statefulset, which uses etcd. So why not use NFS instead of etcd?
The commands to create this Kubernetes WordPress application that shares the NFS volume between MySQL and WordPress are:
kubectl create -f 01-pv-gce.yml
kubectl create -f 02-dep-nfs.yml
kubectl create -f 03-srv-nfs.yml
kubectl get services # update 04-pv-pvc.yml with the new cluster IP of the NFS service (see the sketch after this list)
kubectl create -f 04-pv-pvc.yml
kubectl create -f 05-mysql.yml
kubectl create -f 06-wordpress.yml
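For reference, a minimal sketch of the shape 04-pv-pvc.yml could take (the PV name, server IP, export path, and size are illustrative assumptions; the claim name wp01-pvc-data comes from the describe output below, and the server field must hold the cluster IP reported by kubectl get services):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany        # NFS lets several pods mount the volume read-write
  nfs:
    server: 10.3.240.20  # replace with the NFS service's cluster IP (illustrative)
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp01-pvc-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi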
This implementation did not succeed. The wordpress pod is not starting:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-2899972627-jgjx0 1/1 Running 0 4m
wp01-mysql-1941769936-m9jjd 1/1 Running 0 3m
wp01-wordpress-2362719074-bv53t 0/1 CrashLoopBackOff 4 2m
There seems to be a problem accessing the NFS volume, as described below:
$ kubectl describe pods wp01-wordpress-2362719074-bv53t
Name: wp01-wordpress-2362719074-bv53t
Namespace: default
Node: gke-mappedinn-cluster-default-pool-6264f94a-z0sh/10.240.0.4
Start Time: Thu, 04 May 2017 05:59:12 +0400
Labels: app=wp01
pod-template-hash=2362719074
tier=frontend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wp01-wordpress-2362719074","uid":"44b91da0-306d-11e7-a0d1-42010a...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container wordpress
Status: Running
IP: 10.244.0.4
Controllers: ReplicaSet/wp01-wordpress-2362719074
Containers:
wordpress:
Container ID: docker://658c7392c1b7a5033fe1a1b456a9653161003ee2878a4f02c6a12abb49241d47
Image: wordpress:4.6.1-apache
Image ID: docker://sha256:ee397259d4e59c65e2c1c5979a3634eb3ab106bba389acea8b21862053359134
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 04 May 2017 06:03:16 +0400
Finished: Thu, 04 May 2017 06:03:16 +0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Environment:
WORDPRESS_DB_HOST: wp01-mysql
WORDPRESS_DB_PASSWORD: <set to the key 'password' in secret 'wp01-pwd-wordpress'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k650h (ro)
/var/www/html from wordpress-persistent-storage (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
wordpress-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wp01-pvc-data
ReadOnly: false
default-token-k650h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k650h
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned wp01-wordpress-2362719074-bv53t to gke-mappedinn-cluster-default-pool-6264f94a-z0sh
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulling pulling image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Successfully pulled image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 8647e997d6f4; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 8647e997d6f4
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 37f4f0fd392d; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 37f4f0fd392d
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 10s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id b78a661388a2; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id b78a661388a2
3m 3m 2 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 2b6384407678; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 2b6384407678
3m 2m 4 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 40s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 930a3410b213; Security:[seccomp=unconfined]
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 930a3410b213
2m 1m 7 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
4m 1m 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Container image "wordpress:4.6.1-apache" already present on machine
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 658c7392c1b7; Security:[seccomp=unconfined]
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 658c7392c1b7
4m 10s 19 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Warning BackOff Back-off restarting failed docker container
1m 10s 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh
Could you please help with this issue?
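Not part of the original question, but since the events above only show the generic crash loop, the usual next diagnostic step is to read the WordPress container's own output; the logs of the crashed instance typically name the exact error (for example a permission or lock problem on the NFS mount):
kubectl logs wp01-wordpress-2362719074-bv53t
kubectl logs --previous wp01-wordpress-2362719074-bv53t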

deis builder keeps restarting with failed liveness probe

I tried deleting the pods, rescaling the replicas, and deleting the AWS instances, but I still cannot make the deis builder work normally. It keeps restarting with a failed liveness probe. Below are the logs from the deis builder:
$ kubectl describe pods/deis-builder-2995120344-mz2zg -n deis
Name: deis-builder-2995120344-mz2zg
Namespace: deis
Node: ip-10-0-48-189.ec2.internal/10.0.48.189
Start Time: Wed, 15 Mar 2017 22:29:03 -0400
Labels: app=deis-builder
pod-template-hash=2995120344
Status: Running
IP: 10.34.184.7
Controllers: ReplicaSet/deis-builder-2995120344
Containers:
deis-builder:
Container ID: docker://f2b7799712c347759832270716057b6ac3be68298eef3057c25727b66024c84a
Image: quay.io/deis/builder:v2.7.1
Image ID: docker-pullable://quay.io/deis/builder@sha256:3dab1dd4e6359d1588fee1b4f93ef9f5c70f268f17de5bed4bc13faa210ce5d0
Ports: 2223/TCP, 8092/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 15 Mar 2017 22:37:37 -0400
Finished: Wed, 15 Mar 2017 22:38:15 -0400
Ready: False
Restart Count: 7
Liveness: http-get http://:8092/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8092/readiness delay=30s timeout=1s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/api/auth from builder-key-auth (ro)
/var/run/secrets/deis/builder/ssh from builder-ssh-private-keys (ro)
/var/run/secrets/deis/objectstore/creds from objectstore-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from deis-builder-token-qbqff (ro)
Environment Variables:
DEIS_REGISTRY_SERVICE_HOST: 127.0.0.1
DEIS_REGISTRY_SERVICE_PORT: 5555
HEALTH_SERVER_PORT: 8092
EXTERNAL_PORT: 2223
BUILDER_STORAGE: s3
DEIS_REGISTRY_LOCATION: ecr
DEIS_REGISTRY_SECRET_PREFIX: private-registry
GIT_LOCK_TIMEOUT: 10
SLUGBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'slugbuilder-config'>
SLUG_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'slugbuilder-config'>
DOCKERBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'dockerbuilder-config'>
DOCKER_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'dockerbuilder-config'>
DOCKERIMAGE: 1
DEIS_DEBUG: false
POD_NAMESPACE: deis (v1:metadata.namespace)
DEIS_BUILDER_KEY: <set to the key 'builder-key' in secret 'builder-key-auth'>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
builder-key-auth:
Type: Secret (a volume populated by a Secret)
SecretName: builder-key-auth
builder-ssh-private-keys:
Type: Secret (a volume populated by a Secret)
SecretName: builder-ssh-private-keys
objectstore-creds:
Type: Secret (a volume populated by a Secret)
SecretName: objectstorage-keyfile
deis-builder-token-qbqff:
Type: Secret (a volume populated by a Secret)
SecretName: deis-builder-token-qbqff
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10m 10m 1 {default-scheduler } Normal Scheduled Successfully assigned deis-builder-2995120344-mz2zg to ip-10-0-48-189.ec2.internal
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 7eac3a357f61
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 7eac3a357f61; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 8e730f2731ef; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 8e730f2731ef
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 7eac3a357f61: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 5f4e695c595a; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 8e730f2731ef: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 5f4e695c595a
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id c87d762fc118; Security:[seccomp=unconfined]
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id c87d762fc118
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 5f4e695c595a: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 416573d43fe4; Security:[seccomp=unconfined]
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 416573d43fe4
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id c87d762fc118: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 416573d43fe4: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 6m 4 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id bf5b29729c27; Security:[seccomp=unconfined]
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id bf5b29729c27
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Readiness probe failed: Get http://10.34.184.7:8092/readiness: dial tcp 10.34.184.7:8092: getsockopt: connection refused
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: dial tcp 10.34.184.7:8092: getsockopt: connection refused
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id e457328db858
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id e457328db858; Security:[seccomp=unconfined]
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id bf5b29729c27: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id e457328db858: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 2m 13 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id f2b7799712c3
10m 2m 8 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Pulled Container image "quay.io/deis/builder:v2.7.1" already present on machine
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id f2b7799712c3; Security:[seccomp=unconfined]
10m 1m 6 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1m 1m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id f2b7799712c3: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 9s 26 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning BackOff Back-off restarting failed docker container
1m 9s 9 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
What does helm ls show for the workflow version of deis?
Is anything showing up in the logs for the container when you run the command below?
kubectl --namespace deis logs deis-builder-2995120344-mz2zg
That log output will help anyone trying to figure out what is wrong with your unhealthy builder.
My solution was to delete deis and redeploy it.
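For the record, with a Helm-installed Workflow the delete-and-redeploy typically looks something like the following (the release and chart names are assumptions; check what helm ls reports first):
helm delete deis --purge
helm install deis/workflow --name deis --namespace deis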

System error: exec: "deployment": executable file not found in $PATH

I am following the example exactly (http://kubernetes.io/docs/hellonode/).
After I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
deployment "hello-node" created], the pod does not run OK; I get CrashLoopBackOff status. I have no "deployment" executable.
Any comment is appreciated.
Nobert
==========================================
norbert688@kubernete-codelab-1264:~/hellonode$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-2129762707-hte0f 0/1 CrashLoopBackOff 5 6m
norbert688@kubernete-codelab-1264:~/hellonode$ kubectl describe pod hello
Name: hello-node-2129762707-hte0f
Namespace: default
Node: gke-hello-world-16359f5d-node-zkpf/10.140.0.3
Start Time: Mon, 28 Mar 2016 20:07:53 +0800
Labels: pod-template-hash=2129762707,run=hello-node
Status: Running
IP: 10.16.2.3
Controllers: ReplicaSet/hello-node-2129762707
Containers:
hello-node:
Container ID: docker://dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d
Image: gcr.io/kubernete-codelab-1264/hello-node:v1
Image ID: docker://1fab5e6a9ef21db5518db9bcfbafa52799c38609738f5b3e1c4bb875225b5d61
Port: 8080/TCP
Args:
deployment
hello-node
created
QoS Tier:
cpu: Burstable
memory: BestEffort
Requests:
cpu: 100m
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: [8] System error: exec: "deployment": executable file not found in $PATH
Exit Code: -1
Started: Mon, 28 Mar 2016 20:14:16 +0800
Finished: Mon, 28 Mar 2016 20:14:16 +0800
Ready: False
Restart Count: 6
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-k3zl5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k3zl5
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulling pulling image "gcr.io/kubernete-codelab-1264/hello-node:v1"
6m 6m 1 {default-scheduler } Normal Scheduled Successfully assigned hello-node-2129762707-hte0f to gke-hello-world-16359f5d-node-zkpf
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 41c8fde8f94b
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 41c8fde8f94b with error: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: "deployment": executable file not found in $PATH
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id a99c8dc5cc8a
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id a99c8dc5cc8a with error: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: "deployment": executable file not found in $PATH
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Successfully pulled image "gcr.io/kubernete-codelab-1264/hello-node:v1"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 977b07a9e5de
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 977b07a9e5de with error: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: "deployment": executable file not found in $PATH
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id f8ad177306bc
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id f8ad177306bc with error: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: "deployment": executable file not found in $PATH
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
5m 4m 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id d9218f5385cb
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id d9218f5385cb with error: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: "deployment": executable file not found in $PATH
4m 3m 7 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 7c3c680f18c4 with error: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: "deployment": executable file not found in $PATH
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 7c3c680f18c4
2m 40s 12 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id dfae3b1e068a with error: API error (500): Cannot start container dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d: [8] System error: exec: "deployment": executable file not found in $PATH
26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id dfae3b1e068a
6m 26s 6 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Container image "gcr.io/kubernete-codelab-1264/hello-node:v1" already present on machine
3m 14s 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync (events with common reason combined)
5m 3s 26 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning BackOff Back-off restarting failed docker container
3s 3s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
==========================================
after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created]
Do you mean you ran kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created?
If that is the case, then there is no surprise, since deployment is not an executable in your PATH.
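In other words, deployment "hello-node" created was the output of kubectl run, not part of the command; pasted back in, those extra words become the container's Args (visible under Args: in the describe output above), and Docker then tries to exec deployment. The command to run is only:
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080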