Starting a container/pod after running the istio-proxy - kubernetes

I am trying to add a service mesh to a service running on Kubernetes, using Istio and Envoy. I was able to set up the service and istio-proxy, but I am not able to control the order in which the container and istio-proxy are started.
My container is started first and tries to access an external resource via TCP, but at that point istio-proxy has not finished loading, and neither has the ServiceEntry for the external resource.
I tried adding a panic in my service and also tried sleeping for 5 seconds before accessing the external resource.
Is there a way that I can control the order of these?

On Istio version 1.7.x and above you can set the configuration option values.global.proxy.holdApplicationUntilProxyStarts, which causes the sidecar injector to inject the sidecar at the start of the pod's container list and configures it to block the start of all other containers until the proxy is ready. This option is disabled by default.
According to https://istio.io/latest/news/releases/1.7.x/announcing-1.7/change-notes/
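A minimal sketch of how this could be enabled, either mesh-wide through the IstioOperator values or per workload through a pod annotation (exact paths can differ between Istio releases, so verify against your version):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        holdApplicationUntilProxyStarts: true   # block app containers until the sidecar is ready

Or, per pod, in the deployment's pod template:

metadata:
  annotations:
    proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'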

I don't think you can control the order other than listing the containers in a particular order in your pod spec. So I recommend you configure a readiness probe so that your pod is not marked ready until your service can actually send traffic to the outside.
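A minimal sketch of such a probe, assuming the container image has curl and that https://external.example.com/health stands in for the external resource you depend on:

readinessProbe:
  exec:
    command: ['sh', '-c', 'curl -fsS https://external.example.com/health']   # placeholder endpoint
  initialDelaySeconds: 5
  periodSeconds: 10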

GitHub issue here:
Support startup dependencies between containers on the same Pod
We're currently recommending that developers solve this problem
themselves by running a startup script on their application container
which delays application startup until Envoy has received its initial
configuration. However, this is a bit of a hack and requires changes
to every one of the developer's containers.

Related

Access an application running on a random port in Kubernetes

My application is running within a pod container in a Kubernetes cluster. Every time it starts, it allocates a random port. I would like to access my application from outside (from another pod or node, for example), but since it allocates a random port I cannot create a service (NodePort or LoadBalancer) to map the application port to a specific port in order to access it.
What are the options to handle this case in a kubernetes cluster?
Not supported; check out the issue here. Even with Docker, if your port range is overly broad, you can hit issues as well.
One option to solve this would be to use a dynamically configured sidecar proxy that takes care of routing the pods traffic from a fixed port to the dynamic port of the application.
This has the upside that the service can be scaled even if all instances have different ports.
But this also has the potentially big downside that every request carries added overhead due to the extra network hop, and even if it is just a few milliseconds this can have quite an impact depending on how the application works.
Regarding the dynamic configuration of the proxy you could e.g. use shared pod volumes.
Edit:
Containers within a pod are a little bit special. They are able to share predefined volumes and their network namespace.
For shared volumes this means you are able to define a volume, e.g. of type emptyDir, and mount it in both containers. The result is that you can share information between both containers by writing it into that specific volume in the first container and reading it in the second.
For networking this makes communication between the containers of one pod easier, because you can use the loopback interface to talk between your containers. In your case this means the sidecar proxy container can call your service via localhost:<dynamic-port>.
For further information take a look at the official docs.
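A minimal sketch of such a pod, with a hypothetical application container and a sidecar proxy sharing an emptyDir volume (all names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: shared-config
    emptyDir: {}              # shared scratch space, lives as long as the pod
  containers:
  - name: app
    image: my-app:latest      # hypothetical application image
    volumeMounts:
    - name: shared-config
      mountPath: /shared      # the app writes its dynamic port / proxy config here
  - name: sidecar-proxy
    image: envoyproxy/envoy:v1.27-latest   # version is only an example
    volumeMounts:
    - name: shared-config
      mountPath: /shared      # the proxy reads the generated configuration from here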
So how is this going to help you?
You can use a proxy like Envoy and configure it to watch for dynamic configuration changes. The source for the dynamic configuration should be a volume shared between your application container and the sidecar proxy container.
After your application has started and allocated its dynamic port, you can automatically generate the configuration for the Envoy proxy and save it in the shared volume, the same source Envoy is watching thanks to the aforementioned configuration.
Envoy itself acts as a reverse proxy: it can listen statically on port 8080 (or any other port you like) and route the incoming network traffic to your application's dynamic port by calling localhost:<dynamic-port>.
Unfortunately I don't have any predefined configuration for this use case, but if you need some inspiration you can take a look at Istio - they use something quite similar.
As gohm'c mentioned, there is no built-in way to do this.
As a workaround you can run a script to adjust the service (a rough sketch follows below):
Start the application and retrieve the chosen random port from the application
Modify the Kubernetes service, load balancer, etc. with the new port
The downside of this is that if the port changes there will be a short delay until the service is updated afterwards.
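A rough sketch of such a script, assuming the application writes its chosen port to a file inside the pod and that the pod and Service names are placeholders:

# read the randomly chosen port from the running pod (assumes the app records it in /tmp/port)
APP_PORT=$(kubectl exec my-app-pod -- cat /tmp/port)
# patch the Service so its targetPort points at the newly chosen port
kubectl patch service my-app-service --type='json' \
  -p="[{\"op\": \"replace\", \"path\": \"/spec/ports/0/targetPort\", \"value\": ${APP_PORT}}]"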
How about this:
Deploy and start your application in a Kubernetes cluster.
run kubectl exec <pod name here> -- netstat -tulpn | grep "application name" to identify the port number associated with your application.
run kubectl port-forward <pod name here> <port you like>:<random application port> to access the pod of your application.
Would this work for you?
Edit: This is very similar to @Lukas Eichler's response.

Reusing readiness probe of a dependent service to control start up of a service

I have a back-end service that I will control using Kubernetes (with a Helm chart). This back-end service connects to a database (MongoDB, as it happens). There is no point in starting up the back-end service until the database is ready to receive a connection (the back-end will handle the missing database by retrying, but it wastes resources and fills the log file with distracting error messages).
To do this I believe I could add an init container to my back-end, and have that init container wait (or poll) until the database is ready. It seems this is one of the intended uses of init containers:
Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met.
That is, have the init container of my service do the same operations as the readiness probe of the database. That in turn means copying and pasting code from the configuration (Helm chart) of the database into the configuration (or Helm chart) of my back-end. Not ideal. Is there an easier way? Is there a way I can declare to Kubernetes that my service should not be started until the database is known to be ready?
If I have understood you correctly:
From MongoDB's point of view everything is working as expected when using a readiness probe.
As per documentation:
The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
From the back-end point of view, you can use an init container. The one drawback is that the init container only gates initial startup: once your back-end service has started (after the init container completes successfully) the DB pod will be ready to serve traffic, but if the DB fails later the back-end service will keep filling your logs with error messages, just as before.
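For completeness, a minimal init container that waits for MongoDB could look something like this (a sketch; the host name my-mongodb and the image are assumptions):

initContainers:
- name: wait-for-mongodb
  image: mongo:4.4   # assumption: any image that ships the mongo shell works here
  command: ['sh', '-c',
    'until mongo --host my-mongodb:27017 --eval "db.adminCommand({ping: 1})"; do echo waiting for mongodb; sleep 2; done']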
So what I can propose is to use the solution described here.
In your back-end deployment you can add an additional readiness probe to verify whether your primary dependency is ready to serve traffic. You can use a sidecar container to handle this process (verifying the connection to the primary DB service and, for example, writing status info into a static file at a regular interval). As an example, take a look at the EKG library with the mongoDBCheck sidecar.
Or simply use an exec command that checks the result of the script running inside your sidecar container:
readinessProbe:
  exec:
    command:
    - find
    - alive.txt
    - -mmin
    - '-1'
  initialDelaySeconds: 5
  periodSeconds: 15
Hope this helps.

Application container unable to access network before sidecar ready

I was trying a Fortio server/client application on Istio. I used istioctl to inject the Istio dependency, and my server pod came up fine. But the client pod was giving a connection refused error because the proxy sidecar was not yet ready to handle the client's connection request. Please help me address this issue. For reference, I am attaching my yaml files.
This is by design and there is no way around it.
The part responsible for configuring the iptables rules that capture the traffic runs as an init container, which ensures that the required rules are in place before any of the normal pod containers start up. If you use Istio for all the traffic, then until its container is ready, no network traffic will get in or out of the container.
You should make sure your application handles this right. Apps should be able to withstand unavailability of their dependencies for a time, both on startup and during operation. In the worst case you can introduce your own handling in the form of e.g. a custom entrypoint that waits for communication to be up.
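A sketch of such a custom entrypoint, assuming a recent Istio where the proxy exposes its readiness endpoint on port 15021 (older releases used 15020) and where /app/my-service is a placeholder for your binary:

#!/bin/sh
# wait until the istio-proxy sidecar reports ready before starting the application
until curl -fsS http://localhost:15021/healthz/ready >/dev/null; do
  echo "waiting for istio-proxy..."
  sleep 1
done
exec /app/my-service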

Specify scheduling order of a Kubernetes DaemonSet

I have Consul running in my cluster and each node runs a consul-agent as a DaemonSet. I also have other DaemonSets that interact with Consul and therefore require a consul-agent to be running in order to communicate with the Consul servers.
My problem is, if my DaemonSet is started before the consul-agent, the application will error as it cannot connect to Consul and subsequently get restarted.
I also notice the same problem with other DaemonSets, e.g. Weave, as it requires kube-proxy and kube-dns. If Weave is started first, it will constantly restart until the kube services are ready.
I know I could add retry logic to my application, but I was wondering if it was possible to specify the order in which DaemonSets are scheduled?
Kubernetes itself does not provide a way to specify dependencies between pods / deployments / services (e.g. "start pod A only if service B is available" or "start pod A after pod B").
The current approach (based on what I found while researching this) seems to be retry logic or an init container. To quote the docs:
They run to completion before any app Containers start, whereas app Containers run in parallel, so Init Containers provide an easy way to block or delay the startup of app Containers until some set of preconditions are met.
This means you can either add retry logic to your application (which I would recommend, as it might also help you in different situations such as a short service outage), or you can use an init container that polls a health endpoint via the Kubernetes service name until it gets a satisfying response.
retry logic is preferred over startup dependency ordering, since it handles both the initial bringup case and recovery from post-start outages
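If you do go the init container route, a minimal sketch could look like the following (the consul-server service name, namespace and port are assumptions; adjust them to your setup):

initContainers:
- name: wait-for-consul
  image: curlimages/curl:8.5.0   # any small image with curl or wget will do
  command: ['sh', '-c',
    'until curl -fsS http://consul-server.default.svc.cluster.local:8500/v1/status/leader; do echo waiting for consul; sleep 2; done']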

How to recover k8s default-pool nodes so they become registered

I am trying to deny all egress with a firewall rule,
then test creating a container,
and finally I expect this operation to fail.
But... my question is: how can I recover the nodes so they become registered again?
Is there some command like gcloud container cluster repair [NAME]?
Simply put, this is not possible. The kubelet needs a perpetual connection to the Kubernetes API server, and it is the kubelet that initiates this connection in the first place. When the node registers itself you are not done with the connectivity requirement, as the kubelet will keep watching resources on the API to e.g. notice and act when a new pod is scheduled for this node.
Mind that you also need connectivity from the API server to the kubelet, for example for functionality like kubectl exec, proxy or port-forward. Your monitoring will probably need to connect to kubelet-exposed metrics as well, and maybe to something like prometheus-node-exporter.
The bottom line is that you cannot isolate the node completely. Pods are a different story though. To get detailed control over pod traffic you might want to look into Network Policies and service mesh solutions like Istio.
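For example, a minimal NetworkPolicy that denies all egress for the pods it selects (here every pod in the default namespace, provided your CNI plugin enforces network policies) could look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: default
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
  - Egress               # no egress rules are listed, so all egress traffic is denied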