We are using K8S in a managed Azure environment, Minikube on Ubuntu, and a Rancher cluster built on on-prem machines. In general, our deployments take up to about 30 seconds to pull containers, spin up and become ready. However, my latest deployment (on-prem) takes upwards of a minute and sometimes longer. It is a small web service, very similar to our other deployments. The only obvious difference is the use of a startup probe and a liveness probe; some of our other services do have probes, but they are configured differently.
I removed Octopus Deploy from the equation by extracting the YAML it was applying and running it directly with kubectl. As soon as the (single) pod starts, I begin reading the logs and, as expected, the startup and liveness probes are called very quickly. The startup probe succeeds and the cluster starts calling the liveness probe, which also succeeds. However, kubectl describe on the pod shows Initialized and PodScheduled as True, while ContainersReady (there is one container) and Ready stay False for around a minute. I can't see what would cause this other than probe failures, but the probes are logged as successful.
The pods eventually become ready and work OK, but I don't know why they take so long.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: 'redirect-files-deployments-28775'
  labels:
    Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
    OtherOctopusLabels
spec:
  replicas: 1
  selector:
    matchLabels:
      Octopus.Kubernetes.DeploymentName: 'redirect-files-deployments-28775'
  template:
    metadata:
      labels:
        Octopus.Kubernetes.SelectionStrategyVersion: "SelectionStrategyVersion2"
        OtherOctopusLabels
    spec:
      containers:
      - name: redirect-files
        image: ourregistry.azurecr.io/microservices.redirectfiles:1.0.34
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        env:
        - removed connection strings etc
        livenessProbe:
          httpGet:
            path: /api/version
            port: 80
            scheme: HTTP
          successThreshold: 1
        startupProbe:
          httpGet:
            path: /healthcheck
            port: 80
            scheme: HTTP
            httpHeaders:
            - name: X-SS-Authorisation
              value: asdkjlkwe098sad0akkrweklkrew
          initialDelaySeconds: 5
          timeoutSeconds: 5
      imagePullSecrets:
      - name: octopus-feedcred-feeds-azure-container-registry
So the cause was the startup and/or liveness probes. When I removed them, the deployment time went from over a minute to 18 seconds, despite the logs proving that the probes were called successfully very quickly after containers were started.
At least I now have something more concrete to look for.
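For reference, here is a sketch of the same probes with the timing fields spelled out explicitly; the values below are illustrative, but when these fields are omitted kubelet falls back to its defaults of periodSeconds: 10 and failureThreshold: 3, which can add noticeable delay on their own:
startupProbe:
  httpGet:
    path: /healthcheck
    port: 80
    scheme: HTTP
    # httpHeaders omitted for brevity
  periodSeconds: 1       # probe every second instead of the 10s default
  failureThreshold: 30   # still allows up to ~30s of startup time
livenessProbe:
  httpGet:
    path: /api/version
    port: 80
    scheme: HTTP
  periodSeconds: 10
  failureThreshold: 3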
Related
I have been trying to debug a very odd delay in my K8S deployments. I have tracked it down to the simple reproduction below. It appears that if I set an initialDelaySeconds on a startup probe, or leave it at 0 and have a single failure, then the probe doesn't get run again for a while and the pod ends up with at least a 1-1.5 minute delay getting into the Ready:true state.
I am running locally with Ubuntu 18.04 and microk8s v1.19.3 with the following versions:
kubelet: v1.19.3-34+a56971609ff35a
kube-proxy: v1.19.3-34+a56971609ff35a
containerd://1.3.7
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microbot
  name: microbot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microbot
  strategy: {}
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - image: cdkbot/microbot-amd64
        name: microbot
        command: ["/bin/sh"]
        args: ["-c", "sleep 3; /start_nginx.sh"]
        #args: ["-c", "/start_nginx.sh"]
        ports:
        - containerPort: 80
        startupProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 0 # 5 also has same issue
          periodSeconds: 1
          failureThreshold: 10
          successThreshold: 1
        ##livenessProbe:
        ##  httpGet:
        ##    path: /
        ##    port: 80
        ##  initialDelaySeconds: 0
        ##  periodSeconds: 10
        ##  failureThreshold: 1
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: microbot
  labels:
    app: microbot
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: microbot
The issue is that if I have any delay in the startupProbe, or if there is an initial failure, the pod reaches the Initialized:true state but has Ready:False and ContainersReady:False. It does not change from this state for 1-1.5 minutes. I haven't found a pattern in the settings.
I left the commented-out settings in as well so you can see what I am trying to get to here. What I have is a container whose service takes a few seconds to get started. I want to tell the startupProbe to wait a little bit and then check every second to see if we are ready to go. The configuration seems to work, but there is a baked-in delay that I can't track down. Even after the startup probe is passing, the pod does not transition to Ready for more than a minute.
Is there some setting elsewhere in k8s that is delaying the amount of time before a Pod can move into Ready if it isn't Ready initially?
Any ideas are greatly appreciated.
Actually, I made a mistake in the comments: you can use initialDelaySeconds in a startupProbe, but you should use failureThreshold and periodSeconds instead.
As mentioned here
Kubernetes Probes
Kubernetes supports readiness and liveness probes for versions ≤ 1.15. Startup probes were added in 1.16 as an alpha feature and graduated to beta in 1.18 (WARNING: 1.16 deprecated several Kubernetes APIs. Use this migration guide to check for compatibility).
All the probes have the following parameters:
initialDelaySeconds: number of seconds to wait before initiating liveness or readiness probes
periodSeconds: how often to check the probe
timeoutSeconds: number of seconds before marking the probe as timing out (failing the health check)
successThreshold: minimum number of consecutive successful checks for the probe to pass
failureThreshold: number of retries before marking the probe as failed. For liveness probes, this will lead to the pod restarting. For readiness probes, this will mark the pod as unready.
So why should you use failureThreshold and periodSeconds?
Consider an application that occasionally needs to download large amounts of data or perform an expensive operation at the start of the process. Since initialDelaySeconds is a static number, we are forced to assume the worst-case scenario (or extend the failureThreshold, which may affect long-running behavior) and wait a long time even when the application does not need to carry out long-running initialization steps. With startup probes, we can instead configure failureThreshold and periodSeconds to model this uncertainty better. For example, setting failureThreshold to 15 and periodSeconds to 5 means the application gets 15 x 5 = 75s to start up before it fails.
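As a rough illustration of that arithmetic (a sketch only; the endpoint and port are placeholders):
startupProbe:
  httpGet:
    path: /healthz   # placeholder endpoint
    port: 8080       # placeholder port
  periodSeconds: 5
  failureThreshold: 15   # 15 attempts x 5s = up to 75s allowed for startup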
Additionally, if you need more information, take a look at this article on Medium.
Quoted from the Kubernetes documentation on protecting slow starting containers with startup probes:
Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worst case startup time.
So, the previous example would become:
ports:
- name: liveness-port
  containerPort: 8080
  hostPort: 8080
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 1
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicy.
I have a multi-container application: app + sidecar. Both containers are supposed to be alive all the time, but the sidecar is not really that important.
The sidecar depends on an external resource; if this resource is not available, the sidecar crashes, and it takes the entire pod down with it. Kubernetes tries to recreate the pod and fails because the sidecar still won't start.
But from my business-logic perspective, a crash of the sidecar is absolutely normal. Having the sidecar is nice but not mandatory.
I don't want the sidecar to take the main app down with it when it crashes.
What would be the best Kubernetes-native way to achieve that?
Is it possible to tell Kubernetes to ignore the failure of the sidecar as a "false positive" event which is absolutely fine?
I can't find anything in the pod specification that controls that behaviour.
My yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: logs-dir
        emptyDir: {}
      containers:
      - name: myapp
        image: ${IMAGE}
        ports:
        - containerPort: 9009
        volumeMounts:
        - name: logs-dir
          mountPath: /usr/src/app/logs
        resources:
          limits:
            cpu: "1"
            memory: "512Mi"
        readinessProbe:
          initialDelaySeconds: 60
          failureThreshold: 8
          timeoutSeconds: 1
          periodSeconds: 8
          httpGet:
            scheme: HTTP
            path: /myapp/v1/admin-service/git-info
            port: 9009
      - name: graylog-sidecar
        image: digiapulssi/graylog-sidecar:latest
        volumeMounts:
        - name: logs-dir
          mountPath: /log
        env:
        - name: GS_TAGS
          value: "[\"myapp\"]"
        - name: GS_NODE_ID
          value: "nodeid"
        - name: GS_SERVER_URL
          value: "${GRAYLOG_URL}"
        - name: GS_LIST_LOG_FILES
          value: "[\"/ctwf\"]"
        - name: GS_UPDATE_INTERVAL
          value: "10"
        resources:
          limits:
            memory: "128Mi"
            cpu: "0.1"
Warning: the answer that was flagged as "correct" does not appear to work.
Adding a liveness probe to the application container and setting the restart policy to "Never" will lead to the pod being stopped and never restarted in a scenario where the sidecar container has stopped and the application container has failed its liveness probe. This is a problem, since you DO want the restarts for the application container.
The problem should be solved as follows:
Tweak the startup command of your sidecar container so that its main process keeps running when the application process inside it fails. This could be done with an extra piece of scripting, e.g. by appending | tail -f /dev/null to the startup command.
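A sketch of that approach applied to the graylog sidecar (the graylog-sidecar invocation is a placeholder; use whatever the image's original entrypoint actually runs):
- name: graylog-sidecar
  image: digiapulssi/graylog-sidecar:latest
  command: ["/bin/sh", "-c"]
  # placeholder invocation: if the sidecar process exits, tail keeps the
  # container alive so its failure never takes the pod down
  args: ["graylog-sidecar | tail -f /dev/null"]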
Adding a liveness probe to the application container is in general a good idea. Keep in mind though that it only protects you against the scenario where your application process keeps running without your application being in a correct state. It will certainly not override the restartPolicy:
livenessProbe: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
Container Probes
A custom livenessProbe should help, but for your scenario I would use the liveness probe for your main app container, which is myapp. Considering the fact that you don't care about the sidecar (as mentioned), I would set the pod's restartPolicy to Never and then define a custom livenessProbe for the main myapp container. This way the pod will never restart no matter which container fails, but when the myapp container's liveness probe fails, kubelet will restart the container! Reference below, link
Pod is running and has two Containers. Container 1 exits with failure.
Log failure event.
If restartPolicy is:
Always: Restart Container; Pod phase stays Running.
OnFailure: Restart Container; Pod phase stays Running.
Never: Do not restart Container; Pod phase stays Running.
So the updated (pseudo) YAML should look like the below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    ...
    spec:
      ...
      restartPolicy: Never
      containers:
      - name: myapp
        ...
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - {{ your custom liveness check command goes here }}
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          ...
      - name: graylog-sidecar
        ...
Note: since I don't know your application, I cannot write the command for you, but for my JBoss server I use this (an example for you):
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /opt/jboss/wildfly/bin/jboss-cli.sh --connect --commands="read-attribute server-state"
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
The best solution, which works for me, is not to fail inside the sidecar container, but just to log an error and rerun.
#!/usr/bin/env bash
set -e
# do some stuff which can fail on start
set +e # needed to not exit if command fails
while ! command; do
  echo "command failed - rerun"
done
This will always rerun the command if it fails, but exit if the command finished successfully.
You can define a custom livenessProbe for your sidecar with a greater failureThreshold / periodSeconds to accommodate what is considered an acceptable failure rate in your environment, or simply ignore all failures.
Docs:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#probe-v1-core
kubectl explain deployment.spec.template.spec.containers.livenessProbe
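For example, a very forgiving probe on the sidecar might look like the sketch below (the exec command is a placeholder; as written it always succeeds, which effectively ignores sidecar failures):
- name: graylog-sidecar
  image: digiapulssi/graylog-sidecar:latest
  livenessProbe:
    exec:
      # placeholder check: replace with a real health command for the sidecar,
      # or leave as-is to effectively ignore its failures
      command: ["/bin/sh", "-c", "true"]
    periodSeconds: 60
    failureThreshold: 30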
I have a Kubernetes cluster deployed with RKE which is composed of 3 nodes on 3 different servers, and on each of those servers there is 1 pod running yatsukino/healthereum, which is a personal modification of ethereum/client-go:stable.
The problem is that I don't understand how to add an external IP to send requests only to the pods which are ready.
My pods could be in 3 states:
they are syncing the ethereum blockchain
they restarted because of a sync problem
they are synced and everything is fine
I don't want my load balancer to forward requests to pods in the first 2 states; only in the third state should my pod be considered up to date.
I've been searching in the Kubernetes docs but (maybe because of a misunderstanding) I only find load balancing for pods inside a single node.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: goerli
  name: goerli-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: goerli
  template:
    metadata:
      labels:
        app: goerli
    spec:
      containers:
      - image: yatsukino/healthereum
        name: goerli-geth
        args: ["--goerli", "--datadir", "/app", "--ipcpath", "/root/.ethereum/geth.ipc"]
        env:
        - name: LASTBLOCK
          value: "0"
        - name: FAILCOUNTER
          value: "0"
        ports:
        - containerPort: 30303
          name: geth
        - containerPort: 8545
          name: console
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - /app/health.sh
          initialDelaySeconds: 20
          periodSeconds: 60
        volumeMounts:
        - name: app
          mountPath: /app
      initContainers:
      - name: healthcheck
        image: ethereum/client-go:stable
        command: ["/bin/sh", "-c", "wget -O /app/health.sh http://my-bash-script && chmod 544 /app/health.sh"]
        volumeMounts:
        - name: app
          mountPath: "/app"
      restartPolicy: Always
      volumes:
      - name: app
        hostPath:
          path: /app/
The answers above explain the concepts, but regarding your question about services and external IPs: you must declare the service, for example;
apiVersion: v1
kind: Service
metadata:
  name: goerli
spec:
  selector:
    app: goerli
  ports:
  - port: 8545
  type: LoadBalancer
The type: LoadBalancer will assign an external address in a public cloud, or if you use something like MetalLB. Check your address with kubectl get svc goerli. If the external address is "pending" you have a problem...
If this is your own setup, you can use externalIPs to assign your own external IP;
apiVersion: v1
kind: Service
metadata:
  name: goerli
spec:
  selector:
    app: goerli
  ports:
  - port: 8545
  externalIPs:
  - 222.0.0.30
The externalIPs can be used from outside the cluster, but you must route traffic to the nodes yourself, for example;
ip route add 222.0.0.30/32 \
nexthop via 192.168.0.1 \
nexthop via 192.168.0.2 \
nexthop via 192.168.0.3
This assumes your k8s nodes have IPs 192.168.0.x. It will set up ECMP routes to your nodes. When you make a request from outside the cluster to 222.0.0.30:8545, k8s will load-balance between your ready pods.
For load balancing and exposing your pods, you can use https://kubernetes.io/docs/concepts/services-networking/service/
and for checking when a pod is ready, you can tweak your liveness and readiness probes as explained in https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
For probes you might want to consider exec actions, like executing a script that checks what is required and returns 0 or 1 depending on the status.
When a container is started, Kubernetes can be configured to wait for a configurable amount of time to pass before performing the first readiness check. After that, it invokes the probe periodically and acts based on the result of the readiness probe. If a pod reports that it's not ready, it's removed from the service. If the pod then becomes ready again, it's re-added.
Unlike liveness probes, if a container fails the readiness check, it won't be killed or restarted. This is an important distinction between liveness and readiness probes. Liveness probes keep pods healthy by killing off unhealthy containers and replacing them with new, healthy ones, whereas readiness probes make sure that only pods that are ready to serve requests receive them. This is mostly necessary during container start up, but it's also useful after the container has been running for a while.
I think you can use a readiness probe for your goal.
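For the geth pods above, that could be a readinessProbe reusing the script idea already used for the livenessProbe. A sketch, assuming a /app/readiness.sh script (a placeholder, analogous to health.sh) that exits 0 only once the node is fully synced:
readinessProbe:
  exec:
    command:
    - /bin/sh
    - /app/readiness.sh   # placeholder script: exit 0 only when the chain is fully synced
  initialDelaySeconds: 20
  periodSeconds: 30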
I am trying to achieve zero-downtime deployment using Kubernetes, but every time I upgrade the deployment with a new image, I see 2-3 seconds of downtime. I am testing this with a Hello-World sort of application, but still could not achieve it. I am deploying my application using Helm charts.
Following online blogs and resources, I am using a readiness probe and a RollingUpdate strategy in my Deployment.yaml file. But this gives me no success.
I have created a /health endpoint which simply returns a 200 status code as a check for the readiness probe. I expected that after using readiness probes and the RollingUpdate strategy in Kubernetes I would be able to achieve zero downtime for my service when I upgrade the container image. Requests to my service go through an Amazon ELB.
Deployment.yaml file is as below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wine-deployment
  labels:
    app: wine-store
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: wine-store
  replicas: 2
  template:
    metadata:
      labels:
        app: wine-store
    spec:
      containers:
      - name: {{ .Chart.Name }}
        resources:
          limits:
            cpu: 250m
          requests:
            cpu: 200m
        image: "my-private-image-repository-with-tag-and-version-goes-here-which-i-have-hidden-here"
        imagePullPolicy: Always
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8089
          name: testing-port
        readinessProbe:
          httpGet:
            path: /health
            port: 8089
          initialDelaySeconds: 3
          periodSeconds: 3
Service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: wine-service
  labels:
    app: wine-store
spec:
  ports:
  - port: 80
    targetPort: 8089
    protocol: TCP
  selector:
    app: wine-store
Ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wine-ingress
  annotations:
    kubernetes.io/ingress.class: public-nginx
spec:
  rules:
  - host: my-service-my-internal-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wine-service
          servicePort: 80
I expect the downtime to be zero when I upgrade the image using the helm upgrade command. While the upgrade is in progress, I continuously hit my service using a curl command. This curl command gives me 503 Service Temporarily Unavailable errors for 2-3 seconds and then the service is up again. I expect this downtime not to happen.
This issue is caused by the Service VIP using iptables. You haven't done anything wrong - it's a limitation of current Kubernetes.
When the readiness probe on the new pod passes, the old pod is terminated and kube-proxy rewrites the iptables for the service. However, a request can hit the service after the old pod is terminated but before iptables has been updated resulting in a 503.
A simple workaround is to delay termination by using a preStop lifecycle hook:
lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-c", "sleep 10"]
It's probably not relevant in this case, but implementing graceful termination in your application is a good idea: intercept the TERM signal and wait for your application to finish handling any requests that it has already received rather than exiting immediately.
Alternatively, more replicas, a low maxUnavailable and a high maxSurge will all reduce the probability of requests hitting a terminating pod.
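For example, something along these lines (the numbers are purely illustrative, not a recommendation):
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 2         # allow extra pods during the rollout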
For more info:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables
https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
Another answer mistakenly suggests you need a liveness probe. While it's a good idea to have a liveness probe, it won't affect the issue that you are experiencing. With no liveness probe defined, the default state is Success.
In the context of a rolling deployment a liveness probe is irrelevant: once the readiness probe on the new pod passes, the old pod is sent the TERM signal and iptables is updated. Now that the old pod is terminating, any liveness probe on it is irrelevant, as its only function is to cause a pod to be restarted if the liveness probe fails.
Any liveness probe on the new pod is again irrelevant. When the pod is first started it is considered live by default. Only after the initialDelaySeconds of the liveness probe would it start being checked and, if it failed, the pod would be terminated.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
Go with blue-green deployments instead, because even if pods are up it may take time for kube-proxy to forward requests to the new pod IPs.
So set up the new deployment, and after all its pods are up, update the service selector to the new pod labels.
Follow: https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/
The problem you describe indicates an issue with readiness probes. It is important to understand the difference between liveness and readiness probes. First of all, you should implement and configure both!
Liveness probes check whether the container is started and alive. If this isn't the case, Kubernetes will eventually restart the container.
Readiness probes in turn also check dependencies like database connections or other services your container depends on to fulfill its work. As a developer you have to invest more time into this implementation than just into the liveness probes. You have to expose an endpoint which also checks the mentioned dependencies when queried.
Your current configuration uses a health endpoint which is usually used by liveness probes. It probably doesn't check whether your service is really ready to take traffic.
Kubernetes relies on the readiness probes. During a rolling update, it will keep the old container up and running until the new one declares that it is ready to take traffic. Therefore the readiness probes have to be implemented correctly.
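A sketch of that separation for the container above (the /ready path is an assumption; the endpoint would need to verify the dependencies your service actually requires, e.g. database connections or downstream services):
livenessProbe:
  httpGet:
    path: /health   # existing endpoint: the process is up and responding
    port: 8089
  initialDelaySeconds: 3
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready    # assumed endpoint that also checks downstream dependencies
    port: 8089
  initialDelaySeconds: 3
  periodSeconds: 3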
Is it possible to fake a container to always be ready/live in kubernetes so that kubernetes thinks that the container is live and doesn't try to kill/recreate the container? I am looking for a quick and hacky solution, preferably.
Liveness and readiness probes are not required by k8s controllers; you can simply remove them and your containers will always be considered live/ready.
If you want the hacky approach anyways, use the exec probe (instead of httpGet) with something dummy that always returns 0 as exit code. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        livenessProbe:
          exec:
            command:
            - touch
            - /tmp/healthy
        readinessProbe:
          exec:
            command:
            - touch
            - /tmp/healthy
I'd like to add background contextual information about why/how this can be useful to real-world applications.
Also, by pointing out some additional info about why this question is useful, I can come up with an even better answer.
First off why might you want to implement a fake startup / readiness / liveness probe?
Let's say you have a custom containerized application; you're in a rush, so you go live without any liveness or readiness probes.
Scenario 1:
You have a deployment with 1 replica, but you notice that whenever you update your app (push a new version via a rolling update), your monitoring platform occasionally reports 400, 500, and timeout errors during the rolling update. Post-update you're back at 1 replica and the errors go away.
Scenario 2:
You have enough traffic to warrant autoscaling and multiple replicas. You consistently get 1-3% errors, and 97% success.
Why are you getting errors in both scenarios?
Let's say your app takes 1 minute to finish booting up / be ready to receive traffic. If you don't have readiness probes, then newly spawned instances of your container will receive traffic before they've finished booting up / become ready to receive it. So the newly spawned instances are probably causing the temporary 400, 500, and timeout errors.
How to fix:
You can fix the occasional errors in Scenarios 1 and 2 by adding a readiness probe with an initialDelaySeconds (or a startup probe): basically, something that waits long enough for your container app to finish booting up.
Now the correct, best-practice thing to do is to write a /health endpoint that properly reflects the health of your app. But writing an accurate healthcheck endpoint can take time. In many cases you can get the same end result (make the errors go away) without the effort of creating a /health endpoint, by faking it and just adding a wait period that lets your app finish booting up before traffic is sent to it. (Again, /health is best practice, but for the "ain't nobody got time for that" crowd, faking it can be a good enough stopgap solution.)
Below is a better version of a fake readiness probe. Here's why it's better:
exec-based liveness probes don't work in 100% of cases: they assume a shell exists in the container, and that the referenced commands exist in the container. There are scenarios where hardened containers don't have things like a shell or the touch command.
httpGet, tcpSocket, and grpc liveness probes are done from the perspective of the node running kubelet (the Kubernetes agent), so they don't depend on the software installed in the container and should work on hardened containers that are missing things like the touch command, or even scratch containers. (In other words, this solution should work in 100% of cases vs 99% of the time.)
An alternative to a startup probe is to use initialDelaySeconds with a readiness probe, but that creates unnecessary traffic compared to a startup probe that runs once. (Again, this isn't the best solution in terms of accuracy/fastest possible startup time, but it's often a good enough solution that's very practical.)
Run my example in a cluster and you'll see it's not ready for 60 seconds, then becomes ready after 60 seconds.
Since this is a fake probe, it's pointless to use a readiness/liveness probe; just go with a startup probe, as that will cut down on unnecessary traffic.
In the absence of a readiness probe, the startup probe has the effect of a readiness probe (it blocks the pod from becoming Ready until the probe passes, but only during initial startup).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: useful-hack
  labels:
    app: always-true-tcp-probe
spec:
  replicas: 1
  strategy:
    type: Recreate #dev env fast feedback loop optimized value, don't use in prod
  selector:
    matchLabels:
      app: always-true-tcp-probe
  template:
    metadata:
      labels:
        app: always-true-tcp-probe
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        startupProbe:
          tcpSocket:
            host: 127.0.0.1 #Since kubelet does the probes, this is node's localhost, not pod's localhost
            port: 10250 #worker node kubelet listening port
          successThreshold: 1
          failureThreshold: 2
          initialDelaySeconds: 60 #wait 60 sec before starting the probe
Additional Notes:
The above example keeps traffic within the LAN, which has several benefits:
It'll work in internet-disconnected environments.
It won't incur egress network charges.
The below example will only work in internet-connected environments and isn't too bad for a startup probe, but it would be a bad idea for a readiness/liveness probe as it could clog the NAT GW bandwidth. I'm only including it to point out something of interest.
startupProbe:
  httpGet:
    host: google.com #defaults to pod IP
    path: /
    port: 80
    scheme: HTTP
  successThreshold: 1
  failureThreshold: 2
  initialDelaySeconds: 60
---
startupProbe:
  tcpSocket:
    host: 1.1.1.1 #CloudFlare
    port: 53 #DNS
  successThreshold: 1
  failureThreshold: 2
  initialDelaySeconds: 60
The interesting bit:
Remember I said "httpGet, tcpSocket, and grpc liveness probes are done from the perspective of the node running kubelet (the Kubernetes agent)." Kubelet runs on the worker node's host OS, which is configured for upstream DNS; in other words, it doesn't have access to the inner-cluster DNS entries that kube-dns is aware of. So you can't specify Kubernetes service names in these probes.
Additionally, Kubernetes Service IPs won't work for the probes either, since they're VIPs (Virtual IPs) that only* exist in iptables (*in most cases).