Liveness probe failure output not in kubelet logs - kubernetes

The output from a liveness probe failure when using an exec probe does not show up in the kubelet event logs (Kubernetes version 1.3.2).
For example, I created a pod from the liveness probe example here: http://kubernetes.io/docs/user-guide/liveness/
Using exec-liveness.yaml, I do not get any output explaining why the liveness probe failed:
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to my-node
1m 1m 1 {kubelet my-node} spec.containers{liveness} Normal Pulling pulling image "gcr.io/google_containers/busybox"
1m 1m 1 {kubelet my-node} spec.containers{liveness} Normal Pulled Successfully pulled image "gcr.io/google_containers/busybox"
1m 1m 1 {kubelet my-node} spec.containers{liveness} Normal Created Created container with docker id e84949417706
1m 1m 1 {kubelet my-node} spec.containers{liveness} Normal Started Started container with docker id e84949417706
44s 24s 3 {kubelet my-node} spec.containers{liveness} Warning Unhealthy Liveness probe failed:

This is a bug that will be fixed in Kubernetes v1.4, by https://github.com/kubernetes/kubernetes/pull/30731.
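For reference, the exec-liveness.yaml manifest from that docs page looks roughly like this (a sketch from memory, so check the linked page for the current version):
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
  labels:
    test: liveness
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:        # the probe runs this inside the container; it fails once /tmp/healthy is removed
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Once the fix lands, the Unhealthy event message should include the probe command's output (for example cat's "No such file or directory" error) instead of being empty.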

Related

GKE Deploy issue - Free Tier with credit - Workloads

I am trying to deploy on a minimal cluster and failing.
How can I tweak the configuration to make the availability green?
My input:
My application is a Spring backend with an Angular frontend (please suggest an easy way to deploy both).
My docker-compose setup builds two containers, which I tagged and pushed to the registry.
When deploying in Workloads, I added one container after the other and clicked Deploy. The resulting errors are shown below.
Is there a file I need to create, such as a YAML manifest? (A minimal example is sketched after the events below.)
kubectl get pods
> NAME READY STATUS RESTARTS AGE
> nginx-1-d...7-2s6hb 0/2 CrashLoopBackOff 18 25m
> nginx-1-6..d7-7645w 0/2 CrashLoopBackOff 18 25m
> nginx-1-6...7-9qgjx 0/2 CrashLoopBackOff 18 25m
Events from describe
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned default/nginx-1-5d...56xp4 to gke-cluster-huge-default-pool-b6..60-4rj5
Normal Pulling 17m kubelet Pulling image "eu.gcr.io/p..my/py...my_appserver@sha256:479bf3e12ee2b410d730...579b940adc8845be74956f5"
Normal Pulled 17m kubelet Successfully pulled image "eu.gcr.io/py..my/py...emy_appserver@sha256:479bf3e12ee2b4..8b99a178ee05e8579b940adc8845be74956f5" in 11.742649177s
Normal Created 15m (x5 over 17m) kubelet Created container p..my-appserver-sha256-1
Normal Started 15m (x5 over 17m) kubelet Started container p..emy-appserver-sha256-1
Normal Pulled 15m (x4 over 17m) kubelet Container image "eu.gcr.io/py...my/pya...my_appserver@sha256:479bf3e12ee2b41..e05e8579b940adc8845be74956f5" already present on machine
Warning BackOff 2m42s (x64 over 17m) kubelet Back-off restarting failed container
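A minimal Deployment manifest for such a two-container pod might look like the sketch below; every name, image, and port in it is a placeholder, not taken from the question. Whatever the manifest looks like, CrashLoopBackOff means the containers start and then exit, and kubectl logs <pod> -c <container> usually shows why.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: appserver            # placeholder: the Spring backend image pushed to the registry
        image: eu.gcr.io/PROJECT/appserver:TAG
        ports:
        - containerPort: 8080
      - name: frontend             # placeholder: the Angular frontend image
        image: eu.gcr.io/PROJECT/frontend:TAG
        ports:
        - containerPort: 80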

Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found

I am using Kubernetes version 1.10.
I am trying to pull an image from a local docker repo. I already have the correct secret created.
[root@node1 ~]# kubectl get secret
NAME TYPE DATA AGE
arm-docker kubernetes.io/dockerconfigjson 1 10m
Checking the secret in detail gives me the correct auth token
[root@node1 ~]# kubectl get secret arm-docker --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d
{"auths":{"armdocker.rnd.se":{"username":"<MY-USERNAME>","password":"<MY-PASSWORD>","email":"<MY-EMAIL>","auth":"<CORRECT_AUTH_TOKEN>"}}}
But when I create a Pod, I'm getting the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13s default-scheduler Successfully assigned authorization-backend-deployment-8fd5fc8d4-msxvd to node6
Normal SuccessfulMountVolume 13s kubelet, node6 MountVolume.SetUp succeeded for volume "default-token-w7vlf"
Normal BackOff 4s (x4 over 10s) kubelet, node6 Back-off pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 4s (x4 over 10s) kubelet, node6 Error: ImagePullBackOff
Normal Pulling 1s (x2 over 12s) kubelet, node6 pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 1s (x2 over 12s) kubelet, node6 Failed to pull image "armdocker.rnd.se/proj/authorization_backend:3.6.15": rpc error: code = Unknown desc = Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found
Warning Failed 1s (x2 over 12s) kubelet, node6 Error: ErrImagePull
Why is it looking for /v1/_ping? Can I disable this somehow?
I'm unable to understand what the problem is here.
Once you have defined your secret, you need to reference it inside your pod spec (you didn't say whether you did):
kind: Pod
...
spec:
  imagePullSecrets:
  - name: arm-docker
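For reference, a kubernetes.io/dockerconfigjson secret like arm-docker is typically created with kubectl create secret docker-registry (credentials below are placeholders):
kubectl create secret docker-registry arm-docker \
  --docker-server=armdocker.rnd.se \
  --docker-username=<MY-USERNAME> \
  --docker-password=<MY-PASSWORD> \
  --docker-email=<MY-EMAIL>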

"kubectl set image" fails with ErrImagePull

Very often when I deploy a new image with "kubectl set image", it fails with ErrImagePull status and then fixes itself after some time (up to a few hours). These are the events from "kubectl describe pod":
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
36m 36m 1 {default-scheduler } Normal Scheduled Successfully assigned zzz-staging-2373868389-62tgk to gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5
36m 12m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal Pulling pulling image "us.gcr.io/yyyy-staging/zzz:latest"
31m 11m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Warning Failed Failed to pull image "us.gcr.io/yyyy-staging/zzz:latest": net/http: request canceled
31m 11m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ErrImagePull: "net/http: request canceled"
16m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Normal BackOff Back-off pulling image "us.gcr.io/yyyy-staging/zzz:latest"
16m 7m 7m 3 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImagePullBackOff: "Back-off pulling image \"us.gcr.io/yyyy-staging/zzz:latest\""
24m 7m 5m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} spec.containers{zzz-staging} Warning InspectFailed Failed to inspect image "us.gcr.io/yyyy-staging/zzz:latest": operation timeout: context deadline exceeded
24m 7m 5m 6 {kubelet gke-xxxxxxxxxx-staging-default-pool-ac6a32f4-09h5} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "zzz-staging" with ImageInspectError: "Failed to inspect image \"us.gcr.io/yyyy-staging/zzz:latest\": operation timeout: context deadline exceeded"
Is there a way to avoid that?
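For context, the command in question is roughly the following; the deployment and container names are guesses read off the events above, not confirmed by the question:
kubectl set image deployment/zzz-staging zzz-staging=us.gcr.io/yyyy-staging/zzz:latest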

NFS volume sharing issue between wordpress pod and mysql pod

This repository, kubernetes-wordpress-with-nfs-volume-on-gke, tries to implement a WordPress application that shares an NFS volume between MySQL and WordPress. The idea behind sharing an NFS volume between pods is to implement, as a next step, a StatefulSet for MySQL. That StatefulSet will need to share the database volume between all of the MySQL pods, so that a multi-node database is created that delivers the requested high performance.
There is an example of this, janakiramm/wp-statefulset, but it uses etcd. So why not use NFS instead of etcd?
The commands to create this Kubernetes WordPress application that shares the NFS volume between MySQL and WordPress are:
kubectl create -f 01-pv-gce.yml
kubectl create -f 02-dep-nfs.yml
kubectl create -f 03-srv-nfs.yml
kubectl get services # you have to update the file 04-pv-pvc with the new IP address of the NFS service (see the sketch after these commands)
kubectl create -f 04-pv-pvc.yml
kubectl create -f 05-mysql.yml
kubectl create -f 06-wordpress.yml
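As noted in the comment above, 04-pv-pvc.yml has to point at the NFS service. A minimal NFS-backed PV/PVC pair might look like the sketch below; the server IP, sizes, and the PV name are placeholders, while the claim name matches the one used by the wordpress pod further down:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp01-pv-data           # placeholder name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.3.240.20        # the ClusterIP of the NFS service, taken from kubectl get services
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp01-pvc-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi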
This implementation did not succeed. The wordpress pod is not starting:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-2899972627-jgjx0 1/1 Running 0 4m
wp01-mysql-1941769936-m9jjd 1/1 Running 0 3m
wp01-wordpress-2362719074-bv53t 0/1 CrashLoopBackOff 4 2m
It seems that there is a problem accessing the NFS volume, as described below:
$ kubectl describe pods wp01-wordpress-2362719074-bv53t
Name: wp01-wordpress-2362719074-bv53t
Namespace: default
Node: gke-mappedinn-cluster-default-pool-6264f94a-z0sh/10.240.0.4
Start Time: Thu, 04 May 2017 05:59:12 +0400
Labels: app=wp01
pod-template-hash=2362719074
tier=frontend
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wp01-wordpress-2362719074","uid":"44b91da0-306d-11e7-a0d1-42010a...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container wordpress
Status: Running
IP: 10.244.0.4
Controllers: ReplicaSet/wp01-wordpress-2362719074
Containers:
wordpress:
Container ID: docker://658c7392c1b7a5033fe1a1b456a9653161003ee2878a4f02c6a12abb49241d47
Image: wordpress:4.6.1-apache
Image ID: docker://sha256:ee397259d4e59c65e2c1c5979a3634eb3ab106bba389acea8b21862053359134
Port: 80/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 04 May 2017 06:03:16 +0400
Finished: Thu, 04 May 2017 06:03:16 +0400
Ready: False
Restart Count: 5
Requests:
cpu: 100m
Environment:
WORDPRESS_DB_HOST: wp01-mysql
WORDPRESS_DB_PASSWORD: <set to the key 'password' in secret 'wp01-pwd-wordpress'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-k650h (ro)
/var/www/html from wordpress-persistent-storage (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
wordpress-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wp01-pvc-data
ReadOnly: false
default-token-k650h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k650h
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 default-scheduler Normal Scheduled Successfully assigned wp01-wordpress-2362719074-bv53t to gke-mappedinn-cluster-default-pool-6264f94a-z0sh
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulling pulling image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Successfully pulled image "wordpress:4.6.1-apache"
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 8647e997d6f4; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 8647e997d6f4
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 37f4f0fd392d; Security:[seccomp=unconfined]
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 37f4f0fd392d
4m 4m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 10s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id b78a661388a2; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id b78a661388a2
3m 3m 2 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 2b6384407678; Security:[seccomp=unconfined]
3m 3m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 2b6384407678
3m 2m 4 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 40s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 930a3410b213; Security:[seccomp=unconfined]
2m 2m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 930a3410b213
2m 1m 7 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "wordpress" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=wordpress pod=wp01-wordpress-2362719074-bv53t_default(44ba1226-306d-11e7-a0d1-42010a8e0084)"
4m 1m 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Pulled Container image "wordpress:4.6.1-apache" already present on machine
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Created Created container with docker id 658c7392c1b7; Security:[seccomp=unconfined]
1m 1m 1 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Normal Started Started container with docker id 658c7392c1b7
4m 10s 19 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh spec.containers{wordpress} Warning BackOff Back-off restarting failed docker container
1m 10s 5 kubelet, gke-mappedinn-cluster-default-pool-6264f94a-z0sh
Could you please help with this issue?

deis builder keep restart with liveness probe fail

I tried deleting the pods, rescaling the replicas, and deleting the AWS instances, but I still cannot make the deis builder work normally. It keeps restarting with a failed liveness probe. Below is the kubectl describe output for the deis builder pod:
$ kubectl describe pods/deis-builder-2995120344-mz2zg -n deis
Name: deis-builder-2995120344-mz2zg
Namespace: deis
Node: ip-10-0-48-189.ec2.internal/10.0.48.189
Start Time: Wed, 15 Mar 2017 22:29:03 -0400
Labels: app=deis-builder
pod-template-hash=2995120344
Status: Running
IP: 10.34.184.7
Controllers: ReplicaSet/deis-builder-2995120344
Containers:
deis-builder:
Container ID: docker://f2b7799712c347759832270716057b6ac3be68298eef3057c25727b66024c84a
Image: quay.io/deis/builder:v2.7.1
Image ID: docker-pullable://quay.io/deis/builder#sha256:3dab1dd4e6359d1588fee1b4f93ef9f5c70f268f17de5bed4bc13faa210ce5d0
Ports: 2223/TCP, 8092/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 15 Mar 2017 22:37:37 -0400
Finished: Wed, 15 Mar 2017 22:38:15 -0400
Ready: False
Restart Count: 7
Liveness: http-get http://:8092/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8092/readiness delay=30s timeout=1s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/api/auth from builder-key-auth (ro)
/var/run/secrets/deis/builder/ssh from builder-ssh-private-keys (ro)
/var/run/secrets/deis/objectstore/creds from objectstore-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from deis-builder-token-qbqff (ro)
Environment Variables:
DEIS_REGISTRY_SERVICE_HOST: 127.0.0.1
DEIS_REGISTRY_SERVICE_PORT: 5555
HEALTH_SERVER_PORT: 8092
EXTERNAL_PORT: 2223
BUILDER_STORAGE: s3
DEIS_REGISTRY_LOCATION: ecr
DEIS_REGISTRY_SECRET_PREFIX: private-registry
GIT_LOCK_TIMEOUT: 10
SLUGBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'slugbuilder-config'>
SLUG_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'slugbuilder-config'>
DOCKERBUILDER_IMAGE_NAME: <set to the key 'image' of config map 'dockerbuilder-config'>
DOCKER_BUILDER_IMAGE_PULL_POLICY: <set to the key 'pullpolicy' of config map 'dockerbuilder-config'>
DOCKERIMAGE: 1
DEIS_DEBUG: false
POD_NAMESPACE: deis (v1:metadata.namespace)
DEIS_BUILDER_KEY: <set to the key 'builder-key' in secret 'builder-key-auth'>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
builder-key-auth:
Type: Secret (a volume populated by a Secret)
SecretName: builder-key-auth
builder-ssh-private-keys:
Type: Secret (a volume populated by a Secret)
SecretName: builder-ssh-private-keys
objectstore-creds:
Type: Secret (a volume populated by a Secret)
SecretName: objectstorage-keyfile
deis-builder-token-qbqff:
Type: Secret (a volume populated by a Secret)
SecretName: deis-builder-token-qbqff
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10m 10m 1 {default-scheduler } Normal Scheduled Successfully assigned deis-builder-2995120344-mz2zg to ip-10-0-48-189.ec2.internal
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 7eac3a357f61
10m 10m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 7eac3a357f61; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 8e730f2731ef; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 8e730f2731ef
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 7eac3a357f61: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 5f4e695c595a; Security:[seccomp=unconfined]
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 8e730f2731ef: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 5f4e695c595a
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id c87d762fc118; Security:[seccomp=unconfined]
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id c87d762fc118
8m 8m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 5f4e695c595a: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id 416573d43fe4; Security:[seccomp=unconfined]
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id 416573d43fe4
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id c87d762fc118: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id 416573d43fe4: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 6m 4 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id bf5b29729c27; Security:[seccomp=unconfined]
6m 6m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id bf5b29729c27
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Readiness probe failed: Get http://10.34.184.7:8092/readiness: dial tcp 10.34.184.7:8092: getsockopt: connection refused
9m 5m 4 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: dial tcp 10.34.184.7:8092: getsockopt: connection refused
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id e457328db858
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id e457328db858; Security:[seccomp=unconfined]
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id bf5b29729c27: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 5m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id e457328db858: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
5m 2m 13 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Started Started container with docker id f2b7799712c3
10m 2m 8 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Pulled Container image "quay.io/deis/builder:v2.7.1" already present on machine
2m 2m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Created Created container with docker id f2b7799712c3; Security:[seccomp=unconfined]
10m 1m 6 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning Unhealthy Liveness probe failed: Get http://10.34.184.7:8092/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1m 1m 1 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Normal Killing Killing container with docker id f2b7799712c3: pod "deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)" container "deis-builder" is unhealthy, it will be killed and re-created.
7m 9s 26 {kubelet ip-10-0-48-189.ec2.internal} spec.containers{deis-builder} Warning BackOff Back-off restarting failed docker container
1m 9s 9 {kubelet ip-10-0-48-189.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "deis-builder" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=deis-builder pod=deis-builder-2995120344-mz2zg_deis(52027ebf-09f0-11e7-8bbf-0a73a2cd36e4)"
What does helm ls show for the workflow version of deis?
Does anything show up in the logs for the container when you run the command below?
kubectl --namespace deis logs deis-builder-2995120344-mz2zg
That log output will help anyone trying to help you figure out why your builder is unhealthy.
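Since the container keeps landing in CrashLoopBackOff, the --previous flag may be needed to get the output of the last terminated instance, and while a fresh container is still up you can port-forward to the health endpoint directly (a sketch, using the pod name from above):
kubectl --namespace deis logs --previous deis-builder-2995120344-mz2zg
kubectl --namespace deis port-forward deis-builder-2995120344-mz2zg 8092:8092
curl -v http://localhost:8092/healthz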
My solution was to delete deis and redeploy it.