How to prevent Kubernetes from probing HTTPS? - kubernetes

I'm trying to run a service exposed via ports 80 and 443. The SSL termination happens on the pod.
I specified only port 80 for the liveness probe, but for some reason Kubernetes is probing HTTPS (443) as well. Why is that, and how can I stop it probing 443?
Kubernetes config:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: default
data:
  .dockerconfigjson: xxx==
type: kubernetes.io/dockerconfigjson
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example-com
spec:
  replicas: 0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: example-com
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: example-com
          image: DOCKER_HOST/DOCKER_IMAGE_VERSION
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              protocol: TCP
              name: http
            - containerPort: 443
              protocol: TCP
              name: https
          livenessProbe:
            httpGet:
              scheme: "HTTP"
              path: "/_ah/health"
              port: 80
              httpHeaders:
                - name: Host
                  value: example.com
            initialDelaySeconds: 35
            periodSeconds: 35
          readinessProbe:
            httpGet:
              scheme: "HTTP"
              path: "/_ah/health"
              port: 80
              httpHeaders:
                - name: Host
                  value: example.com
            initialDelaySeconds: 35
            periodSeconds: 35
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: example-com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 0
      name: http
    - port: 443
      protocol: TCP
      targetPort: 443
      nodePort: 0
      name: https
  selector:
    app: example-com
The error logs on the pods clearly indicate that Kubernetes is trying to access the service via HTTPS.
kubectl describe pod example-com-86876875c7-b75hr
Name: example-com-86876875c7-b75hr
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: aks-agentpool-37281605-0/10.240.0.4
Start Time: Sat, 17 Nov 2018 19:58:30 +0200
Labels: app=example-com
pod-template-hash=4243243173
Annotations: <none>
Status: Running
IP: 10.244.0.65
Controlled By: ReplicaSet/example-com-86876875c7
Containers:
example-com:
Container ID: docker://c5eeb03558adda435725a0df3cc2d15943966c3df53e9462e964108969c8317a
Image: example-com.azurecr.io/example-com:2018-11-17_19-58-05
Image ID: docker-pullable://example-com.azurecr.io/example-com#sha256:5d425187b8663ecfc5d6cc78f6c5dd29f1559d3687ba9d4c0421fd0ad109743e
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sat, 17 Nov 2018 20:07:59 +0200
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Sat, 17 Nov 2018 20:05:39 +0200
Finished: Sat, 17 Nov 2018 20:07:55 +0200
Ready: False
Restart Count: 3
Limits:
cpu: 500m
Requests:
cpu: 250m
Liveness: http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3
Readiness: http-get http://:80/_ah/health delay=35s timeout=1s period=35s #success=1 #failure=3
Environment:
NABU: nabu
KUBERNETES_PORT_443_TCP_ADDR: agile-kube-b3e5753f.hcp.westeurope.azmk8s.io
KUBERNETES_PORT: tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443
KUBERNETES_PORT_443_TCP: tcp://agile-kube-b3e5753f.hcp.westeurope.azmk8s.io:443
KUBERNETES_SERVICE_HOST: agile-kube-b3e5753f.hcp.westeurope.azmk8s.io
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rcr7c (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-rcr7c:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rcr7c
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/example-com-86876875c7-b75hr to aks-agentpool-37281605-0
Warning Unhealthy 3m46s (x6 over 7m16s) kubelet, aks-agentpool-37281605-0 Liveness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal Pulling 3m45s (x3 over 10m) kubelet, aks-agentpool-37281605-0 pulling image "example-com.azurecr.io/example-com:2018-11-17_19-58-05"
Normal Killing 3m45s (x2 over 6m5s) kubelet, aks-agentpool-37281605-0 Killing container with id docker://example-com:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 3m44s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Successfully pulled image "example-com.azurecr.io/example-com:2018-11-17_19-58-05"
Normal Created 3m42s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Created container
Normal Started 3m42s (x3 over 10m) kubelet, aks-agentpool-37281605-0 Started container
Warning Unhealthy 39s (x9 over 7m4s) kubelet, aks-agentpool-37281605-0 Readiness probe failed: Get https://example.com/_ah/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

As per your comments, you are doing an HTTP-to-HTTPS redirect in the pod, so the probe cannot connect to it. If you still want to serve a probe on port 80, you should consider using a TCP probe. For example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example-com
spec:
  ...
  minReadySeconds: 30
  template:
    metadata:
      labels:
        app: example-com
    spec:
      imagePullSecrets:
        - name: myregistrykey
      containers:
        - name: example-com
          ...
          livenessProbe:
            httpGet:
              scheme: "HTTP"
              path: "/_ah/health"
              port: 80
              httpHeaders:
                - name: Host
                  value: example.com
            initialDelaySeconds: 35
            periodSeconds: 35
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 35
            periodSeconds: 35
          ...
Alternatively, you can skip the redirect for certain URLs in your application, as mentioned in night-gold's answer.

The problem doesn't come from Kubernetes but from your web server. Kubernetes is doing exactly what you asked and probing the HTTP URL, but your server is redirecting it to HTTPS, and that is what causes the error.
If you are using Apache, you should look here: Apache https block redirect; if you use nginx, look there: nginx https block redirect.
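As an illustration, assuming an nginx server that currently applies a blanket HTTPS redirect, one common approach is to exempt the health-check path so the probe gets a plain 200 over HTTP (the server_name and path below mirror the question's config; adapt them to your setup):

```nginx
server {
    listen 80;
    server_name example.com;

    # Answer the probe endpoint directly over HTTP so the kubelet
    # receives a 200 instead of a 301 redirect.
    location /_ah/health {
        add_header Content-Type text/plain;
        return 200 'ok';
    }

    # Redirect everything else to HTTPS as before.
    location / {
        return 301 https://$host$request_uri;
    }
}
```

With this in place, the HTTP probe on port 80 succeeds while normal traffic still gets redirected.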

Related

dial tcp 10.1.0.35:8080: connect: connection refused in docker desktop

I'm deploying a simple application image, which has readiness, startup, and liveness probes, through the Docker Desktop app. I searched for similar issues, but none of them matched the one I'm facing, so I created this post.
Image : rahulwagh17/kubernetes:jhooq-k8s-springboot
Below is the deployment manifest used.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jhooq-springboot
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jhooq-springboot
  template:
    metadata:
      labels:
        app: jhooq-springboot
    spec:
      containers:
        - name: springboot
          image: rahulwagh17/kubernetes:jhooq-k8s-springboot
          resources:
            requests:
              memory: "128Mi"
              cpu: "512m"
            limits:
              memory: "128Mi"
              cpu: "512m"
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /hello
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /hello
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          startupProbe:
            httpGet:
              path: /hello
              port: 8080
            failureThreshold: 60
            periodSeconds: 10
          env:
            - name: PORT
              value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: jhooq-springboot
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: jhooq-springboot
After deploying, the pod status is CrashLoopBackOff due to Startup probe failed: Get "http://10.1.0.36:8080/hello": dial tcp 10.1.0.36:8080: connect: connection refused
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> Successfully assigned default/jhooq-springboot-6dbc755d48-4pqcz to docker-desktop
Warning Unhealthy 7m22s (x7 over 8m42s) kubelet, docker-desktop Startup probe failed: Get "http://10.1.0.36:8080/hello": dial tcp 10.1.0.36:8080: connect: connection refused
Normal Pulled 6m56s (x4 over 8m51s) kubelet, docker-desktop Container image "rahulwagh17/kubernetes:jhooq-k8s-springboot" already present on machine
Normal Created 6m56s (x4 over 8m51s) kubelet, docker-desktop Created container springboot
Normal Started 6m56s (x4 over 8m51s) kubelet, docker-desktop Started container springboot
Warning BackOff 3m40s (x19 over 8m6s) kubelet, docker-desktop Back-off restarting failed container
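A connection-refused on a startup probe usually means nothing is listening on that port yet, or the app is bound to a different port than the probe targets. A quick way to check, sketched here against the pod name from the events above (and assuming the image ships a shell and wget, which may not hold for your image), is:

```shell
# Watch the container logs while it starts, to see whether the
# Spring Boot app ever binds to port 8080 before the probe window ends.
kubectl logs -f jhooq-springboot-6dbc755d48-4pqcz

# From inside the container, hit the probe endpoint directly to
# confirm what port the app is actually serving on.
kubectl exec -it jhooq-springboot-6dbc755d48-4pqcz -- \
  sh -c 'wget -qO- http://localhost:8080/hello'
```

If the in-container request also fails, the problem is in the application or its PORT configuration rather than in the probe definition.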

kubelet does not have ClusterDNS IP configured in Microk8s

I'm using microk8s on Ubuntu.
I'm trying to run a simple hello-world program, but I get the following error when the pod is created:
kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy
Here is the deployment.yaml file which I'm trying to apply.
apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
    - port: 80
      targetPort: 9000
      protocol: TCP
      name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http2_port=9000",
            "--backend=grpc://127.0.0.1:50051",
            "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
            "--rollout_strategy=managed",
          ]
          ports:
            - containerPort: 9000
        - name: python-grpc-hello
          image: gcr.io/octa-test-123/python-grpc-hello:1.0
          ports:
            - containerPort: 50051
Here is what I get when I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
Normal Started 30s kubelet, azeem-ubuntu Started container python-grpc-hello
Normal Pulled 30s kubelet, azeem-ubuntu Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
Normal Created 30s kubelet, azeem-ubuntu Created container python-grpc-hello
Normal Pulled 12s (x3 over 31s) kubelet, azeem-ubuntu Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
Normal Created 12s (x3 over 31s) kubelet, azeem-ubuntu Created container esp
Normal Started 12s (x3 over 30s) kubelet, azeem-ubuntu Started container esp
Warning MissingClusterDNS 8s (x10 over 31s) kubelet, azeem-ubuntu pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning BackOff 8s (x2 over 23s) kubelet, azeem-ubuntu Back-off restarting failed container
I searched a lot about this and found some answers, but none of them worked for me. I also created kube-dns myself, but I don't know why it still isn't working. The kube-dns pods are running, in the kube-system namespace.
NAME READY STATUS RESTARTS AGE
kube-dns-6dbd676f7-dfbjq 3/3 Running 0 22m
And here is what I applied to create kube-dns:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
  # Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      volumes:
        - name: kube-dns-config
          configMap:
            name: kube-dns
            optional: true
      containers:
        - name: kubedns
          image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
          resources:
            # TODO: Set memory limits when we've profiled the container for large
            # clusters, then set request = limit to keep this container in
            # guaranteed class. Currently, this container falls into the
            # "burstable" category so the kubelet doesn't backoff from restarting it.
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          livenessProbe:
            httpGet:
              path: /healthcheck/kubedns
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8081
              scheme: HTTP
            # we poll on pod startup for the Kubernetes master service and
            # only setup the /readiness HTTP server once that's available.
            initialDelaySeconds: 3
            timeoutSeconds: 5
          args:
            - --domain=cluster.local.
            - --dns-port=10053
            - --config-dir=/kube-dns-config
            - --v=2
          env:
            - name: PROMETHEUS_PORT
              value: "10055"
          ports:
            - containerPort: 10053
              name: dns-local
              protocol: UDP
            - containerPort: 10053
              name: dns-tcp-local
              protocol: TCP
            - containerPort: 10055
              name: metrics
              protocol: TCP
          volumeMounts:
            - name: kube-dns-config
              mountPath: /kube-dns-config
        - name: dnsmasq
          image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
          livenessProbe:
            httpGet:
              path: /healthcheck/dnsmasq
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          args:
            - -v=2
            - -logtostderr
            - -configDir=/etc/k8s/dns/dnsmasq-nanny
            - -restartDnsmasq=true
            - --
            - -k
            - --cache-size=1000
            - --no-negcache
            - --log-facility=-
            - --server=/cluster.local/127.0.0.1#10053
            - --server=/in-addr.arpa/127.0.0.1#10053
            - --server=/ip6.arpa/127.0.0.1#10053
          ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
          # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
          resources:
            requests:
              cpu: 150m
              memory: 20Mi
          volumeMounts:
            - name: kube-dns-config
              mountPath: /etc/k8s/dns/dnsmasq-nanny
        - name: sidecar
          image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
          livenessProbe:
            httpGet:
              path: /metrics
              port: 10054
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          args:
            - --v=2
            - --logtostderr
            - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
            - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
          ports:
            - containerPort: 10054
              name: metrics
              protocol: TCP
          resources:
            requests:
              memory: 20Mi
              cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
Please let me know what I'm missing.
You have not specified how you deployed kube-dns, but with microk8s it is recommended to use CoreDNS. You should not deploy kube-dns or CoreDNS on your own; instead, enable DNS with the command microk8s enable dns, which deploys CoreDNS and sets it up.
I had the same problem with a microk8s cluster. Although I had already enabled the dns add-on, it didn't work.
I looked up the kube-dns service's cluster IP address with
kubectl -nkube-system get svc/kube-dns
I stopped the microk8s cluster and edited the kubelet configuration file /var/snap/microk8s/current/args/kubelet, adding the following lines (in my case):
--resolv-conf=""
--cluster-dns=A.B.C.D
--cluster-domain=cluster.local
Afterward, I started the cluster and the problem did not occur again.
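The steps above can be sketched as follows (a sketch, not an exact transcript: the cluster IP 10.152.183.10 is taken from the kube-dns Service manifest in the question, and you should substitute whatever the first command prints on your cluster):

```shell
# Look up the cluster IP of the kube-dns service
kubectl -n kube-system get svc/kube-dns -o jsonpath='{.spec.clusterIP}'

# Stop the cluster before editing the kubelet arguments
microk8s stop

# Append the DNS flags to the kubelet args file (run as root)
cat >> /var/snap/microk8s/current/args/kubelet <<'EOF'
--resolv-conf=""
--cluster-dns=10.152.183.10
--cluster-domain=cluster.local
EOF

# Start the cluster again so kubelet picks up the new flags
microk8s start
```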

NEG says Pods are 'unhealthy', but actually the Pods are healthy

I'm trying to apply gRPC load balancing with Ingress on GCP, and for this I referenced this example. The example demonstrates gRPC load balancing working in two ways (one with an Envoy sidecar, the other with an HTTP mux handling both gRPC and the HTTP health check on the same Pod). However, the Envoy proxy example doesn't work.
What confuses me is that the Pods are running and healthy (confirmed by kubectl describe and kubectl logs):
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
fe-deployment-757ffcbd57-4w446 2/2 Running 0 4m22s
fe-deployment-757ffcbd57-xrrm9 2/2 Running 0 4m22s
$ kubectl describe pod fe-deployment-757ffcbd57-4w446
Name: fe-deployment-757ffcbd57-4w446
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc/10.128.0.64
Start Time: Thu, 26 Sep 2019 16:15:18 +0900
Labels: app=fe
pod-template-hash=757ffcbd57
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container fe-envoy; cpu request for container fe-container
Status: Running
IP: 10.56.1.29
Controlled By: ReplicaSet/fe-deployment-757ffcbd57
Containers:
fe-envoy:
Container ID: docker://b4789909494f7eeb8d3af66cb59168e009c582d412d8ca683a7f435559989421
Image: envoyproxy/envoy:latest
Image ID: docker-pullable://envoyproxy/envoy#sha256:9ef9c4fd6189fdb903929dc5aa0492a51d6783777de65e567382ac7d9a28106b
Port: 8080/TCP
Host Port: 0/TCP
Command:
/usr/local/bin/envoy
Args:
-c
/data/config/envoy.yaml
State: Running
Started: Thu, 26 Sep 2019 16:15:19 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data/certs from certs-volume (rw)
/data/config from envoy-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
fe-container:
Container ID: docker://a533224d3ea8b5e4d5e268a616d73762b37df69f434342459f35caa8fac32dab
Image: salrashid123/grpc_only_backend
Image ID: docker-pullable://salrashid123/grpc_only_backend#sha256:ebfac594116445dd67aff7c9e7a619d73222b60947e46ef65ee6d918db3e1f4b
Port: 50051/TCP
Host Port: 0/TCP
Command:
/grpc_server
Args:
--grpcport
:50051
--insecure
State: Running
Started: Thu, 26 Sep 2019 16:15:20 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
certs-volume:
Type: Secret (a volume populated by a Secret)
SecretName: fe-secret
Optional: false
envoy-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: envoy-configmap
Optional: false
default-token-c7nqc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c7nqc
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m25s default-scheduler Successfully assigned default/fe-deployment-757ffcbd57-4w446 to gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc
Normal Pulled 4m25s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Container image "envoyproxy/envoy:latest" already present on machine
Normal Created 4m24s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Created container
Normal Started 4m24s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Started container
Normal Pulling 4m24s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc pulling image "salrashid123/grpc_only_backend"
Normal Pulled 4m24s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Successfully pulled image "salrashid123/grpc_only_backend"
Normal Created 4m24s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Created container
Normal Started 4m23s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Started container
Warning Unhealthy 4m10s (x2 over 4m20s) kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Readiness probe failed: HTTP probe failed with statuscode: 503
Warning Unhealthy 4m9s (x2 over 4m19s) kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc Liveness probe failed: HTTP probe failed with statuscode: 503
$ kubectl describe pod fe-deployment-757ffcbd57-xrrm9
Name: fe-deployment-757ffcbd57-xrrm9
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9/10.128.0.22
Start Time: Thu, 26 Sep 2019 16:15:18 +0900
Labels: app=fe
pod-template-hash=757ffcbd57
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container fe-envoy; cpu request for container fe-container
Status: Running
IP: 10.56.0.23
Controlled By: ReplicaSet/fe-deployment-757ffcbd57
Containers:
fe-envoy:
Container ID: docker://255dd6cab1e681e30ccfe158f7d72540576788dbf6be60b703982a7ecbb310b1
Image: envoyproxy/envoy:latest
Image ID: docker-pullable://envoyproxy/envoy#sha256:9ef9c4fd6189fdb903929dc5aa0492a51d6783777de65e567382ac7d9a28106b
Port: 8080/TCP
Host Port: 0/TCP
Command:
/usr/local/bin/envoy
Args:
-c
/data/config/envoy.yaml
State: Running
Started: Thu, 26 Sep 2019 16:15:19 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data/certs from certs-volume (rw)
/data/config from envoy-config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
fe-container:
Container ID: docker://f6a0246129cc89da846c473daaa1c1770d2b5419b6015098b0d4f35782b0a9da
Image: salrashid123/grpc_only_backend
Image ID: docker-pullable://salrashid123/grpc_only_backend#sha256:ebfac594116445dd67aff7c9e7a619d73222b60947e46ef65ee6d918db3e1f4b
Port: 50051/TCP
Host Port: 0/TCP
Command:
/grpc_server
Args:
--grpcport
:50051
--insecure
State: Running
Started: Thu, 26 Sep 2019 16:15:20 +0900
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
certs-volume:
Type: Secret (a volume populated by a Secret)
SecretName: fe-secret
Optional: false
envoy-config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: envoy-configmap
Optional: false
default-token-c7nqc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c7nqc
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m8s default-scheduler Successfully assigned default/fe-deployment-757ffcbd57-xrrm9 to gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9
Normal Pulled 5m8s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Container image "envoyproxy/envoy:latest" already present on machine
Normal Created 5m7s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Created container
Normal Started 5m7s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Started container
Normal Pulling 5m7s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 pulling image "salrashid123/grpc_only_backend"
Normal Pulled 5m7s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Successfully pulled image "salrashid123/grpc_only_backend"
Normal Created 5m7s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Created container
Normal Started 5m6s kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Started container
Warning Unhealthy 4m53s (x2 over 5m3s) kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Readiness probe failed: HTTP probe failed with statuscode: 503
Warning Unhealthy 4m52s (x2 over 5m2s) kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9 Liveness probe failed: HTTP probe failed with statuscode: 503
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fe-srv-ingress NodePort 10.123.5.165 <none> 8080:30816/TCP 6m43s
fe-srv-lb LoadBalancer 10.123.15.36 35.224.69.60 50051:30592/TCP 6m42s
kubernetes ClusterIP 10.123.0.1 <none> 443/TCP 2d2h
$ kubectl describe service fe-srv-ingress
Name: fe-srv-ingress
Namespace: default
Labels: type=fe-srv
Annotations: cloud.google.com/neg: {"ingress": true}
cloud.google.com/neg-status:
{"network_endpoint_groups":{"8080":"k8s1-963b7b91-default-fe-srv-ingress-8080-e459b0d2"},"zones":["us-central1-a"]}
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/neg":"{\"ingress\": true}","service.alpha.kubernetes.io/a...
service.alpha.kubernetes.io/app-protocols: {"fe":"HTTP2"}
Selector: app=fe
Type: NodePort
IP: 10.123.5.165
Port: fe 8080/TCP
TargetPort: 8080/TCP
NodePort: fe 30816/TCP
Endpoints: 10.56.0.23:8080,10.56.1.29:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Create 6m47s neg-controller Created NEG "k8s1-963b7b91-default-fe-srv-ingress-8080-e459b0d2" for default/fe-srv-ingress-8080/8080 in "us-central1-a".
Normal Attach 6m40s neg-controller Attach 2 network endpoint(s) (NEG "k8s1-963b7b91-default-fe-srv-ingress-8080-e459b0d2" in zone "us-central1-a")
but the NEG says they are unhealthy (so the Ingress also reports the backend as unhealthy).
I couldn't find what caused this. Does anyone know how to solve it?
Test environment:
GKE, 1.13.7-gke.8 (VPC enabled)
Default HTTP(s) load balancer on Ingress
YAML files I used (the same as in the example previously mentioned):
envoy-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-configmap
  labels:
    app: fe
data:
  config: |-
    ---
    admin:
      access_log_path: /dev/null
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 9000
    node:
      cluster: service_greeter
      id: test-id
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.http_connection_manager
            config:
              stat_prefix: ingress_http
              codec_type: AUTO
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match:
                      path: "/echo.EchoServer/SayHello"
                    route: { cluster: local_grpc_endpoint }
              http_filters:
              - name: envoy.lua
                config:
                  inline_code: |
                    package.path = "/etc/envoy/lua/?.lua;/usr/share/lua/5.1/nginx/?.lua;/etc/envoy/lua/" .. package.path
                    function envoy_on_request(request_handle)
                      if request_handle:headers():get(":path") == "/_ah/health" then
                        local headers, body = request_handle:httpCall(
                          "local_admin",
                          {
                            [":method"] = "GET",
                            [":path"] = "/clusters",
                            [":authority"] = "local_admin"
                          }, "", 50)
                        str = "local_grpc_endpoint::127.0.0.1:50051::health_flags::healthy"
                        if string.match(body, str) then
                          request_handle:respond({[":status"] = "200"}, "ok")
                        else
                          request_handle:logWarn("Envoy healthcheck failed")
                          request_handle:respond({[":status"] = "503"}, "unavailable")
                        end
                      end
                    end
              - name: envoy.router
                typed_config: {}
          tls_context:
            common_tls_context:
              tls_certificates:
              - certificate_chain:
                  filename: "/data/certs/tls.crt"
                private_key:
                  filename: "/data/certs/tls.key"
      clusters:
      - name: local_grpc_endpoint
        connect_timeout: 0.05s
        type: STATIC
        http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        common_lb_config:
          healthy_panic_threshold:
            value: 50.0
        health_checks:
        - timeout: 1s
          interval: 5s
          interval_jitter: 1s
          no_traffic_interval: 5s
          unhealthy_threshold: 1
          healthy_threshold: 3
          grpc_health_check:
            service_name: "echo.EchoServer"
            authority: "server.domain.com"
        hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 50051
      - name: local_admin
        connect_timeout: 0.05s
        type: STATIC
        lb_policy: ROUND_ROBIN
        hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 9000
fe-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fe-deployment
  labels:
    app: fe
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: fe
    spec:
      containers:
        - name: fe-envoy
          image: envoyproxy/envoy:latest
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /_ah/health
              scheme: HTTPS
              port: fe
          readinessProbe:
            httpGet:
              path: /_ah/health
              scheme: HTTPS
              port: fe
          ports:
            - name: fe
              containerPort: 8080
              protocol: TCP
          command: ["/usr/local/bin/envoy"]
          args: ["-c", "/data/config/envoy.yaml"]
          volumeMounts:
            - name: certs-volume
              mountPath: /data/certs
            - name: envoy-config-volume
              mountPath: /data/config
        - name: fe-container
          image: salrashid123/grpc_only_backend # This runs gRPC secure/insecure server using port argument(:50051). Port 50051 is also exposed on Dockerfile.
          imagePullPolicy: Always
          ports:
            - containerPort: 50051
              protocol: TCP
          command: ["/grpc_server"]
          args: ["--grpcport", ":50051", "--insecure"]
      volumes:
        - name: certs-volume
          secret:
            secretName: fe-secret
        - name: envoy-config-volume
          configMap:
            name: envoy-configmap
            items:
              - key: config
                path: envoy.yaml
fe-srv-ingress.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fe-srv-ingress
  labels:
    type: fe-srv
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"fe":"HTTP2"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
    - name: fe
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: fe
fe-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fe-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
    - hosts:
        - server.domain.com
      secretName: fe-secret
  rules:
    - host: server.domain.com
      http:
        paths:
          - path: /echo.EchoServer/*
            backend:
              serviceName: fe-srv-ingress
              servicePort: 8080
I had to allow traffic from the IP ranges specified as the health-check source in the documentation, 130.211.0.0/22 and 35.191.0.0/16, as described here: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg
And I had to allow it both for the default network and for the new (regional) network the cluster lives in.
Once I added these firewall rules, health checks could reach the pods exposed in the NEG used as a regional backend within a backend service of our HTTP(S) load balancer.
There may be a more restrictive firewall setup, but I just cut corners and allowed everything from the IP ranges declared as the health-check source on the page referenced above.
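As a sketch of that firewall rule (the rule name and network name here are placeholders, and port 8080 matches the NEG backend port from the question; adjust both to your setup):

```shell
# Allow Google Cloud health-check probes to reach the NEG backends.
# "allow-gclb-health-checks" and "my-network" are example names.
gcloud compute firewall-rules create allow-gclb-health-checks \
  --network=my-network \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8080 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16
```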
A GCP committer says this is a kind of bug, so there is no way to fix it at this time.
The related issue is this one, and a pull request is now in progress.

Istio allowing all outbound traffic

Putting everything in detail here for better clarification: my service consists of the following resources in a dedicated namespace (not using a ServiceEntry).
Deployment (1 deployment)
Configmaps (1 configmap)
Service
VirtualService
GW
Istio is enabled in namespace and when I create / run deployment it create 2 pods as it should. Now as stated in issues subject I want to allow all outgoing traffic for deployment because my serives needs to connect with 2 service discovery server:
connect to Vault running on port 8200
connect to a Spring config server running on HTTP
download dependencies and communicate with other services (which are not part of the VPC/Kubernetes)
With the following deployment file, outgoing connections do not open. The only thing that works is a plain HTTPS request on port 443: curl https://google.com succeeds, but there is no response from curl http://google.com. The logs also show that the connection to Vault is not being established.
I have tried almost all combinations in the deployment, but none of them seems to work. Am I missing something or doing this the wrong way? I would really appreciate contributions here :)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-nampesapce
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-application-service-env-variables
        image: image.from.dockerhub:latest
        name: my-application-service-pod
        ports:
        - containerPort: 8080
          name: myappsvc
        resources:
          limits:
            cpu: 700m
            memory: 1.8Gi
          requests:
            cpu: 500m
            memory: 1.7Gi
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
  - my-application.mydomain.com
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        host: my-application-service
        port:
          number: 80
kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: api-my-application-service-deployment
  ports:
  - port: 80
    targetPort: myappsvc
    protocol: TCP
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.mydomain.com"
Namespace with Istio enabled:
Name: temp-namespace
Labels: istio-injection=enabled
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
Describing the pod shows that Istio and the sidecar are working:
Name: my-application-service-deployment-fb897c6d6-9ztnx
Namespace: temp-namepsace
Node: ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time: Sun, 21 Oct 2018 14:40:26 +0500
Labels: app=my-application-service-deployment
pod-template-hash=964537282
Annotations: sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status: Running
IP: 100.115.0.4
Controlled By: ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
istio-init:
Container ID: docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
Image: docker.io/istio/proxy_init:1.0.2
Image ID: docker-pullable://istio/proxy_init#sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8080,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 21 Oct 2018 14:40:26 +0500
Finished: Sun, 21 Oct 2018 14:40:26 +0500
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
my-application-service-pod:
Container ID: docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
Image: image.from.dockerhub:latest
Image ID: docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env#sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Limits:
cpu: 700m
memory: 1932735283200m
Requests:
cpu: 500m
memory: 1825361100800m
Environment Variables from:
my-application-service-env-variables ConfigMap Optional: false
Environment:
vault.token: <set to the key 'vault_token' in secret 'vault.token'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
istio-proxy:
Container ID: docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
Image: docker.io/istio/proxyv2:1.0.2
Image ID: docker-pullable://istio/proxyv2#sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
my-application-service-deployment
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Sun, 21 Oct 2018 14:40:28 +0500
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
POD_NAMESPACE: temp-namepsace (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-rc8kc:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rc8kc
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-certs"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-rc8kc"
Normal SuccessfulMountVolume 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal MountVolume.SetUp succeeded for volume "istio-envoy"
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Scheduled 3m default-scheduler Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-42-231-93.eu-west-1.compute.internal
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "image.from.dockerhub:latest" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
Normal Pulled 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
Normal Created 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Created container
Normal Started 3m kubelet, ip-172-31-231-93.eu-west-1.compute.internal Started container
The issue was that I had added the annotation to the Deployment's metadata instead of to the pod template; adding it to the pod template resolved the issue. Got help from here:
https://github.com/istio/istio/issues/9304
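For reference, a minimal sketch of the fix (names abbreviated from the manifests above): the annotation must sit under spec.template.metadata.annotations, where the sidecar injector reads it at pod creation time, not under the Deployment's own metadata.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-application-service-deployment
  namespace: temp-namespace
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        # On the pod template, so the istio sidecar injector picks it up
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
      labels:
        app: my-application-service-deployment
    # pod spec (containers etc.) unchanged from the original manifest
```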

kube-dns keeps restarting with kubenetes on coreos

I have Kubernetes installed on Container Linux by CoreOS alpha (1353.1.0),
using hyperkube v1.5.5_coreos.0 and my fork of the coreos-kubernetes install scripts at https://github.com/kfirufk/coreos-kubernetes.
I have two Container Linux machines:
coreos-2.tux-in.com, resolving to 192.168.1.2 (controller)
coreos-3.tux-in.com, resolving to 192.168.1.3 (worker)
kubectl get pods --all-namespaces returns
NAMESPACE NAME READY STATUS RESTARTS AGE
ceph ceph-mds-2743106415-rkww4 0/1 Pending 0 1d
ceph ceph-mon-check-3856521781-bd6k5 1/1 Running 0 1d
kube-lego kube-lego-3323932148-g2tf4 1/1 Running 0 1d
kube-system calico-node-xq6j7 2/2 Running 0 1d
kube-system calico-node-xzpp2 2/2 Running 4560 1d
kube-system calico-policy-controller-610849172-b7xjr 1/1 Running 0 1d
kube-system heapster-v1.3.0-beta.0-2754576759-v1f50 2/2 Running 0 1d
kube-system kube-apiserver-192.168.1.2 1/1 Running 0 1d
kube-system kube-controller-manager-192.168.1.2 1/1 Running 1 1d
kube-system kube-dns-3675956729-r7hhf 3/4 Running 3924 1d
kube-system kube-dns-autoscaler-505723555-l2pph 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.2 1/1 Running 0 1d
kube-system kube-proxy-192.168.1.3 1/1 Running 0 1d
kube-system kube-scheduler-192.168.1.2 1/1 Running 1 1d
kube-system kubernetes-dashboard-3697905830-vdz23 1/1 Running 1246 1d
kube-system monitoring-grafana-4013973156-m2r2v 1/1 Running 0 1d
kube-system monitoring-influxdb-651061958-2mdtf 1/1 Running 0 1d
nginx-ingress default-http-backend-150165654-s4z04 1/1 Running 2 1d
so I can see that kube-dns-3675956729-r7hhf keeps restarting.
kubectl describe pod kube-dns-3675956729-r7hhf --namespace=kube-system returns:
Name: kube-dns-3675956729-r7hhf
Namespace: kube-system
Node: 192.168.1.2/192.168.1.2
Start Time: Sat, 11 Mar 2017 17:54:14 +0000
Labels: k8s-app=kube-dns
pod-template-hash=3675956729
Status: Running
IP: 10.2.67.243
Controllers: ReplicaSet/kube-dns-3675956729
Containers:
kubedns:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:kubedns
Image: gcr.io/google_containers/kubedns-amd64:1.9
Image ID: rkt://sha512-c7b7c9c4393bea5f9dc5bcbe1acf1036c2aca36ac14b5e17fd3c675a396c4219
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-map=kube-dns
--v=2
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: False
Restart Count: 981
Liveness: http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables:
PROMETHEUS_PORT: 10055
dnsmasq:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq
Image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
Image ID: rkt://sha512-8c5f8b40f6813bb676ce04cd545c55add0dc8af5a3be642320244b74ea03f872
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--no-resolv
--server=127.0.0.1#10053
--log-facility=-
Requests:
cpu: 150m
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
dnsmasq-metrics:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:dnsmasq-metrics
Image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
Image ID: rkt://sha512-ceb3b6af1cd67389358be14af36b5e8fb6925e78ca137b28b93e0d8af134585b
Port: 10054/TCP
Args:
--v=2
--logtostderr
Requests:
memory: 10Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
healthz:
Container ID: rkt://f6480fe7-4316-4e0e-9483-0944feb85ea3:healthz
Image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
Image ID: rkt://sha512-3a85b0533dfba81b5083a93c7e091377123dac0942f46883a4c10c25cf0ad177
Port: 8080/TCP
Args:
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
--url=/healthz-dnsmasq
--cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
--url=/healthz-kubedns
--port=8080
--quiet
Limits:
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
State: Running
Started: Sun, 12 Mar 2017 17:47:41 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 12 Mar 2017 17:46:28 +0000
Finished: Sun, 12 Mar 2017 17:47:02 +0000
Ready: True
Restart Count: 981
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbdp (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-zqbdp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbdp
QoS Class: Burstable
Tolerations: CriticalAddonsOnly=:Exists
No events.
which shows that kubedns-amd64:1.9 is in Ready: False.
This is my kube-dns-de.yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.9
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4.1
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:v1.2.0
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default
and this is my kube-dns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
any information regarding the issue would be greatly appreciated!
update
rkt list --full 2> /dev/null | grep kubedns shows:
744a4579-0849-4fae-b1f5-cb05d40f3734 kubedns gcr.io/google_containers/kubedns-amd64:1.9 sha512-c7b7c9c4393b running 2017-03-22 22:14:55.801 +0000 UTC 2017-03-22 22:14:56.814 +0000 UTC
journalctl -m _MACHINE_ID=744a45790849b1f5cb05d40f3734 provides:
Mar 22 22:17:58 kube-dns-3675956729-sthcv kubedns[8]: E0322 22:17:58.619254 8 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: connect: network is unreachable
I tried to add - --proxy-mode=userspace to /etc/kubernetes/manifests/kube-proxy.yaml but the results are the same.
kubectl get svc --all-namespaces provides:
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ceph ceph-mon None <none> 6789/TCP 1h
default kubernetes 10.3.0.1 <none> 443/TCP 1h
kube-system heapster 10.3.0.2 <none> 80/TCP 1h
kube-system kube-dns 10.3.0.10 <none> 53/UDP,53/TCP 1h
kube-system kubernetes-dashboard 10.3.0.116 <none> 80/TCP 1h
kube-system monitoring-grafana 10.3.0.187 <none> 80/TCP 1h
kube-system monitoring-influxdb 10.3.0.214 <none> 8086/TCP 1h
nginx-ingress default-http-backend 10.3.0.233 <none> 80/TCP 1h
kubectl get cs provides:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
my kube-proxy.yaml has the following content:
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
  annotations:
    rkt.alpha.kubernetes.io/stage1-name-override: coreos.com/rkt/stage1-fly
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.5_coreos.0
    command:
    - /hyperkube
    - proxy
    - --cluster-cidr=10.2.0.0/16
    - --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: "ssl-certs"
    - mountPath: /etc/kubernetes/controller-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
    - mountPath: /var/run/dbus
      name: dbus
      readOnly: false
  volumes:
  - hostPath:
      path: "/usr/share/ca-certificates"
    name: "ssl-certs"
  - hostPath:
      path: "/etc/kubernetes/controller-kubeconfig.yaml"
    name: "kubeconfig"
  - hostPath:
      path: "/etc/kubernetes/ssl"
    name: "etc-kube-ssl"
  - hostPath:
      path: /var/run/dbus
    name: dbus
this is all the valuable information I could find. any ideas? :)
update 2
Output of iptables-save on the controller Container Linux machine: http://pastebin.com/2GApCj0n
update 3
I ran curl on the controller node
# curl https://10.3.0.1 --insecure
Unauthorized
This means it can access it properly; I just didn't pass enough parameters for it to be authorized, right?
update 4
Thanks to @jaxxstorm, I removed the Calico manifests, updated their quay/cni and quay/node versions, and reinstalled them.
Now kubedns keeps restarting, but I think Calico works now, because for the first time it tries to schedule kubedns on the worker node and not on the controller node. Also, when I rkt enter the kubedns pod and try to wget https://10.3.0.1, I get:
# wget https://10.3.0.1
Connecting to 10.3.0.1 (10.3.0.1:443)
wget: can't execute 'ssl_helper': No such file or directory
wget: error getting response: Connection reset by peer
which clearly shows that there is some kind of response, which is good, right?
now kubectl get pods --all-namespaces shows:
kube-system kube-dns-3675956729-ljz2w 4/4 Running 88 42m
So... 4/4 ready, but it keeps restarting.
kubectl describe pod kube-dns-3675956729-ljz2w --namespace=kube-system output at http://pastebin.com/Z70U331G
So it can't connect to http://10.2.47.19:8081/readiness; I'm guessing this is the IP of kubedns, since it uses port 8081. I don't know how to investigate this issue further.
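One way to keep digging (a sketch; the pod IP 10.2.47.19 is taken from the describe output above, and the commands assume you can run them from a node or pod with network access to the pod network):

```shell
# Hit the kubedns readiness endpoint directly, bypassing the kubelet probe,
# to see whether the endpoint itself answers or the route is dead.
curl -v http://10.2.47.19:8081/readiness

# Try resolving through the kubedns pod IP directly (port 10053 is the
# kubedns dns-port from the Deployment above).
nslookup kubernetes.default.svc.cluster.local 10.2.47.19
```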
thanks for everything!
Lots of great debugging info here, thanks!
This is the clincher:
# curl https://10.3.0.1 --insecure
Unauthorized
You got an unauthorized response because you didn't pass a client cert, but that's fine, it's not what we're after. This proves that kube-proxy is working as expected and is accessible. Your rkt logs:
Mar 22 22:17:58 kube-dns-3675956729-sthcv kubedns[8]: E0322 22:17:58.619254 8 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.3.0.1:443: connect: network is unreachable
are indicating that there are network connectivity issues inside the containers, which suggests to me that you haven't configured container networking/CNI.
Please have a read through this document: https://coreos.com/rkt/docs/latest/networking/overview.html
You may also have to reconfigure Calico; there is some more information here: http://docs.projectcalico.org/master/getting-started/rkt/
kube-dns has a readiness probe that tries resolving through the Service IP of kube-dns. Is it possible that there is a problem with your Service network?
Check out the answer and solution here:
kubernetes service IPs not reachable
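A quick way to test that hypothesis (a sketch, assuming a busybox image is pullable in the cluster; 10.3.0.10 is the clusterIP from the kube-dns Service manifest above):

```shell
# Launch a throwaway pod and try resolving through the kube-dns Service IP.
kubectl run -i -t dns-test --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local 10.3.0.10

# If this times out while querying the kubedns pod IP directly works,
# the Service layer (kube-proxy / iptables) is the problem rather than
# kube-dns itself.
kubectl delete pod dns-test
```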