I am trying to create an architecture where every Deployment ships with a ClusterIP Service and a matching rule gets automatically added to the Ingress as a new path.
My initial thinking was to give the Deployment a ServiceAccount that has access to manage Ingress rules; before the main pod runs, an init container would fetch the Ingress YAML and add the rule, and on deletion it would remove that rule again.
But the more I think about it, the more loopholes come to mind. For example: what happens when two Deployments start at the same time? And things like that.
Any idea on how to tackle this would be appreciated.
My background: I am a cloud engineer trying to shift to DevOps, have beginner-intermediate level knowledge of Kubernetes.
A few options:
Init container (you already figured this one out and mentioned it in your question)
Add an init container to your Deployment which adds the desired rule to your Ingress (see the sketch after this list).
Probes / lifecycle hooks
Add a postStart hook which is executed once your pod is up and running and which then updates the Ingress rules.
CronJob
Add a CronJob which periodically "scans" for changes and updates the Ingress.
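For the init container route, here is a minimal sketch, assuming a single shared Ingress named shared-ingress and a Deployment/Service named my-app (all names, the path, and the port are placeholders). The pod's ServiceAccount needs RBAC permission to get and patch Ingresses, and two Deployments patching the same Ingress at the same time can still race, so you may want to retry on conflict:

# Pod template spec of the Deployment (sketch only).
spec:
  serviceAccountName: ingress-editor        # must be allowed to get/patch ingresses
  initContainers:
  - name: register-ingress-path
    image: bitnami/kubectl:latest
    command: ["/bin/sh", "-c"]
    args:
    # Append a new path rule pointing at this app's ClusterIP Service.
    - >
      kubectl patch ingress shared-ingress --type=json -p
      '[{"op":"add","path":"/spec/rules/0/http/paths/-","value":{"path":"/my-app","pathType":"Prefix","backend":{"service":{"name":"my-app","port":{"number":80}}}}}]'
  containers:
  - name: my-app
    image: my-app:latest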
Is there any way to add an ENV to a pod, or to new pods, in Kubernetes?
For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so these pods can access the internet.
I searched and found that Istio might be able to do that, but it's too complex for me.
Second, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV to each of them.
So does anyone have a good, simple way to do this, e.g. in the Kubernetes configuration?
Use "PodPreset" object to inject common environment variables and other params to all the matching pods.
Please follow below article
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
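As a minimal sketch (the proxy URL and label are placeholders; note that PodPreset is an alpha API, settings.k8s.io/v1alpha1, which needs the PodPreset admission controller enabled and was removed in v1.20):

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-proxy-env
spec:
  selector:
    matchLabels:
      inject-proxy: "true"        # pods carrying this label get the env vars
  env:
  - name: HTTP_PROXY
    value: "http://proxy.example.com:3128"
  - name: HTTPS_PROXY
    value: "http://proxy.example.com:3128"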
If PodPreset is indeed removed in v1.20, then you will need a webhook.
You will have to run an additional service in your cluster that changes the configuration of the pods.
Here is an example on the basis of which I created my own webhook that changed the configuration of the pods in my cluster. In that example the author adds a sidecar to the pod, but you can apply your own logic to inject the required ENV:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
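For reference, a minimal sketch of the registration side, assuming the webhook server is already deployed as a Service called env-injector in kube-system serving /mutate over TLS (the names, namespace, path, and caBundle are placeholders; the server itself must return a JSONPatch that adds the env vars):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: env-injector
webhooks:
- name: env-injector.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore              # don't block pod creation if the webhook is down
  clientConfig:
    service:
      name: env-injector
      namespace: kube-system
      path: /mutate
    caBundle: <base64-encoded CA certificate>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]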
If we have a requirement to modify a property of running pods, which would be the recommended way, and why?
I guess once a pod is deployed as part of a Deployment, we can modify the pod's properties either with kubectl edit pod or with kubectl edit deploy.
I would like to understand whether there is any difference between these two actions.
Modify the Deployment, not the Pod.
Why?
The Deployment describes the desired state for your pods. The Deployment controller continuously watches the Deployment object in a control loop. It reads the desired pod state from the Deployment specification and tries to ensure that state in the cluster. So, if you edit the pod and change something, the Deployment controller will overwrite the change on the next resync, because your modification is not present in the Deployment specification.
For the most part you can't edit the pods. In the API definition of a PodSpec, the containers and initContainers fields are both described as "Cannot be updated." Almost all of the interesting things in a Pod spec are in the Container sub-objects.
The corollary to this is that you can't "modify properties of running pods" for the most part; you can only delete and replace them with new pods with the properties you want. If you edit the pod template in a deployment spec, Kubernetes will do exactly that.
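A short illustration of the practical difference, using a hypothetical Deployment called my-app (the name and image tag are placeholders):

# Editing the Deployment changes the desired state; the controller replaces the pods.
kubectl edit deployment my-app
# Or change a single field directly, e.g. the container image:
kubectl set image deployment/my-app my-app=my-app:v2
kubectl rollout status deployment/my-app
# Editing a pod directly (where the API even allows it) is reverted or lost
# as soon as the Deployment controller reconciles or the pod is replaced.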
I set up a private K8s cluster with RKE 1.2.2, so my K8s version is 1.19. We have some internal services, and they need to reach each other using custom FQDNs instead of simple service names. As I searched the web, the only solution I found is adding rewrite records to the CoreDNS ConfigMap, as described in this REF. However, this solution requires manual configuration, and I want to define a record automatically during service setup. Is there any solution for this automation? Does CoreDNS have an API to add or delete rewrite records?
Note 1: I also tried to mount CoreDNS's ConfigMap and update it from another pod, but the content is mounted read-only.
Note 2: Someone proposed calling kubectl get cm -n kube-system coredns -o yaml | sed ... | kubectl apply .... However, I want to automate it during service setup, in a pod, or in an init container.
Note 3: I wish there were something like hostAliases for Services, something like serviceAliases for internal Services (ClusterIP).
Currently, there is no ready solution for this.
The only thing that comes to my mind is to use a MutatingAdmissionWebhook. It would need to catch the moment when a new Kubernetes Service is created and then modify the ConfigMap for CoreDNS, as described in the CoreDNS documentation.
After that, you would need to reload the CoreDNS configuration to apply the new configuration from the ConfigMap. To achieve that, you can use the reload plugin for CoreDNS. More details about this plugin can be found here.
Instead of the above, you can consider using a sidecar container for CoreDNS which sends a SIGUSR1 signal to the CoreDNS container.
Example of this method can be found in this Github thread.
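For illustration, a minimal sketch of what the Corefile in the coredns ConfigMap could look like with one rewrite record and the reload plugin enabled (the custom FQDN and Service name are placeholders):

.:53 {
    errors
    health
    # Map a custom FQDN to a normal in-cluster Service name.
    rewrite name api.internal.example.com my-api.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    # Re-read the Corefile periodically so ConfigMap changes are picked up.
    reload
    loop
}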
We are using Secrets as environment variables on pods, but every time a Secret is updated we have to redeploy the pods for the change to take effect. We are looking for a mechanism where pods get restarted automatically whenever a Secret gets updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use a Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.
Second, managing Secrets can be a bit tricky. It would be great if you can use a setup with the Kustomize secretGenerator: each new Secret then gets a unique name, and that unique name is automatically reflected in the Deployment, so your pods are automatically recreated when a Secret is changed. This matches your original problem. When the Secret and Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
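A minimal sketch of such a kustomization.yaml, assuming a deployment.yaml next to it that references a Secret named app-secret (the file name, Secret name, and literal are placeholders); secretGenerator appends a content hash to the Secret name and rewrites the reference in the Deployment, so a changed Secret rolls the pods:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
secretGenerator:
- name: app-secret
  literals:
  - API_KEY=changeme        # changing this produces a new app-secret-<hash>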
If you mount your Secrets into the pod as a volume, they are updated automatically and you don't have to restart your pod, as mentioned here.
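A minimal sketch of mounting a Secret as a volume (the pod, container, and Secret names are placeholders); note that the kubelet refreshes the mounted files when the Secret changes, but the application still has to re-read them, and environment variables populated from a Secret are not refreshed this way:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: app-secret
      mountPath: /etc/secrets     # files here are refreshed when the Secret changes
      readOnly: true
  volumes:
  - name: app-secret
    secret:
      secretName: app-secret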
Another approach is Stakater Reloader, which can reload your Deployments based on changes to ConfigMaps, Secrets, etc.
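With Reloader running in the cluster, a sketch of the annotation it watches for (the annotation key is taken from the Reloader docs; the Deployment and Secret names are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Roll the pods of this Deployment whenever the named Secret changes.
    secret.reloader.stakater.com/reload: "app-secret"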
There are multiple ways of doing this:
Simply restart the pod
this can be done manually, or,
you could use the kapp-controller operator provided by VMware Carvel (documentation). Using kapp-controller you can reload the Secrets/ConfigMaps without needing to restart the pods (it effectively runs helm template <package> on a periodic basis and applies the changes if it finds any differences in the rendered templates); check out my design for reloading the log level without needing to restart the pod.
Using service bindings https://servicebinding.io/
Is there a way to reload currently running pods created by a ReplicationController so they pick up newly created Services?
Example:
I have running pods created by a ReplicationController config file. I deleted a service called mongo-svc and recreated it using a different port. Is there a way for the pod's environment to be updated with the new IP and port from the new mongo-svc?
You can restart pods by simply deleting them: if they are managed by a ReplicationController, the RC will take care of recreating them.
kubectl delete pod <your-pod-name>
if you have a couple of pods, it's easy enough to copy/paste the pod names, but if you have many pods it can become cumbersome.
So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need.
kubectl scale --replicas=0 rc <your-rc>
kubectl scale --replicas=<n> rc <your-rc>
By the way, you may also want to look at rolling updates to do this in a more production-friendly manner, but that implies updating the RC config.
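For completeness, a sketch of the RC-era rolling update (the RC name and image are placeholders; note that kubectl rolling-update only applies to ReplicationControllers and has since been removed from kubectl in favor of Deployments):

# Gradually replace the pods of an RC with ones running a new image.
kubectl rolling-update <your-rc> --image=<new-image>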
If you want the same pod to pick up the new service, the clean answer is no. You could (I strongly suggest not to do this) run kubectl exec <pod-name> -c <container> -- export <service env var name>=<service env var value>. But your best bet is to run kubectl delete <pod-name> and let your replication controller handle the work.
I've run into a similar issue for services running outside of Kubernetes, say a DB for instance. To address this I've been creating https://github.com/cpg1111/kubongo, which updates the service's endpoint without deleting the pods. The same idea can also be applied to other pods in Kubernetes to automate the service update. Basically it watches a specific service, and when its IP changes for whatever reason it updates all the pods without deleting them. This uses the same code as kubectl exec, but it is automated, sanitizes input, and ensures the export is executed on all pods.
What do you mean by 'reapply'?
The pods to which the services point are generally selected based on labels. In other words, you can add or remove labels on the pods to include or exclude them from a Service.
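For example, a minimal Service sketch (the names and port are placeholders) whose selector determines which pods receive traffic:

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  selector:
    app: mongo          # only pods carrying this label back the Service
  ports:
  - port: 27017
    targetPort: 27017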
Read here for more information about defining services: http://kubernetes.io/v1.1/docs/user-guide/services.html#defining-a-service
And here for more information about labels: http://kubernetes.io/v1.1/docs/user-guide/labels.html
Hope it helps!