Callback-method-like options in the Kubernetes API

I have been working with Kubernetes REST API calls to create deployments and services using the Python client. Now the scenario is that I have to create a deployment, and when the pods get ready I have to tell users that their deployment is ready, using some callback method. I can achieve this from the CLI with
watch kubectl describe pod <pod-name>
and looking at the pod status.
But how can I implement a callback function which is called when the pod status changes, e.g. from ContainerCreating -> Ready?
Any help would be appreciated.

There is a Python Kubernetes client library:
pip install kubernetes

import os

from kubernetes import client, config

def get_pods(name, exact=False, namespace='default'):
    # TODO: load the config once (e.g. in a client object) instead of on every call.
    config.load_kube_config(os.path.join(os.environ["HOME"], '.kube/config'))
    v1 = client.CoreV1Api()
    pod_list = v1.list_namespaced_pod(namespace)
    if exact:
        relevant_pods = [pod for pod in pod_list.items if name == pod.metadata.name]
    else:
        relevant_pods = [pod for pod in pod_list.items if name in pod.metadata.name]
    return relevant_pods
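To get an actual callback when a pod becomes ready, the same library also ships a watch module that streams pod change events. Below is a minimal sketch, assuming a user-supplied on_ready callback; the function names and the readiness check are illustrative, not part of the library, and a real implementation would also deduplicate repeated MODIFIED events:

import os

from kubernetes import client, config, watch

def on_ready(pod):
    # hypothetical callback: notify your users however you like
    print(f"Pod {pod.metadata.name} is ready")

def watch_pods(namespace='default'):
    config.load_kube_config(os.path.join(os.environ["HOME"], '.kube/config'))
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # stream() yields dicts with 'type' (ADDED/MODIFIED/DELETED) and 'object' (a V1Pod)
    for event in w.stream(v1.list_namespaced_pod, namespace):
        pod = event['object']
        statuses = pod.status.container_statuses or []
        if pod.status.phase == 'Running' and statuses and all(cs.ready for cs in statuses):
            on_ready(pod)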
You can also use Apache Airflow for many such use cases:
https://kubernetes.io/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/
I wrote a library which reacts to Kubernetes events and more.
It is reactive, meaning it is directly opposed to the declarative style of Helm, and it is a debuggable Python deployment tool intended to replace Helm.
https://github.com/hamshif/Wielder
pip install wielder
For open-source usage examples, see
https://github.com/hamshif/wield-services
and, in tandem with Apache Airflow,
https://github.com/hamshif/dags/tree/6daf6313d35824b58efa7f61f90e30a169946532

I guess you could watch the events on the namespace where you are deploying the application and react to those.
Events such as the ones you see at the end of kubectl describe pod are persisted in etcd and provide high-level information on what is happening in the cluster. To list all events you can use kubectl get events.
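As a sketch of reacting to those events programmatically instead of via kubectl, the Python client can stream them with the same watch mechanism (the namespace and the print handler below are placeholders):

from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()
# each streamed item wraps an Event object for the watched namespace
for item in w.stream(v1.list_namespaced_event, 'default'):
    ev = item['object']
    print(ev.involved_object.kind, ev.involved_object.name, ev.reason, ev.message)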

Related

Create Kubernetes events using the Kubernetes events API

I am pretty new to Kubernetes and have a specific interest in Kubernetes eventing. What I did was create a K8s cluster, assign pods to that cluster, and type the kubectl get events command to see the corresponding events. Now for my work, I need to explore how I can create a K8s Event resource using the eventing API provided here, so that using the kubectl get events command I can see the event stored in etcd. But as I mentioned, I don't know K8s that deeply, and I'm struggling to make this API work.
N.B. I had a look into Knative Eventing, but the eventing feature Knative provides seems to be different from the K8s eventing, as I can't see the Knative events in the kubectl get events command (please correct me if I am wrong).
Thanks in advance.
Usually events are created in Kubernetes as a way of informing of relevant status changes of an object.
Creating an event is as simple as:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Event
metadata:
  name: myevent
  namespace: default
type: Warning
message: 'not sure if this is what you want to do'
involvedObject:
  kind: someObject
EOF
But usually this is done programmatically, using the Kubernetes client library of your choice and the core (v1) API group.
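As a minimal sketch of that programmatic route with the official Python client (the event name and the Pod it refers to are placeholders; note that recent client releases call the model CoreV1Event, while older ones call it V1Event):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# CoreV1Event in recent client releases; older releases name it V1Event
event = client.CoreV1Event(
    metadata=client.V1ObjectMeta(name="myevent", namespace="default"),
    involved_object=client.V1ObjectReference(
        kind="Pod", name="mypod", namespace="default"),
    reason="Example",
    message="not sure if this is what you want to do",
    type="Warning",
)
v1.create_namespaced_event(namespace="default", body=event)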
Knative Eventing is a platform that enables event-driven architectures, which does not seem related to your question, but if it were, you can find all the getting-started docs here:
Installation: https://knative.dev/docs/install/yaml-install/eventing/install-eventing-with-yaml/
Getting started: https://knative.dev/docs/eventing/getting-started/
Maybe this could be helpful too. Knative has this concept of "Event Sources" that already exist and serve as links between event producers and sinks.
In the case of the K8s events API there is one called the APIServer Source: https://knative.dev/docs/eventing/sources/apiserversource/
A bit more about Event Sources here: https://knative.dev/docs/eventing/sources/

Assign FQDN for Internal Services in a Private Kubernetes Cluster

I set up a private K8s cluster with RKE 1.2.2, so my K8s version is 1.19. We have some internal services, and it is necessary for them to reach each other using custom FQDNs instead of simple service names. As I searched the web, the only solution I found is adding rewrite records to the CoreDNS ConfigMap as described in this REF. However, this solution requires manual configuration, and I want to define a record automatically during service setup. Is there any solution for this automation? Does CoreDNS have such an API to add or delete rewrite records?
Note 1: I also tried to mount CoreDNS's ConfigMap and update it via another pod, but the content is mounted read-only.
Note 2: Someone proposed calling kubectl get cm -n kube-system coredns -o yaml | sed ... | kubectl apply .... However, I want to automate it during service setup, in a pod, or in an init container.
Note 3: I wish there were something like hostAliases for services, something called serviceAliases for internal services (ClusterIP).
Currently, there is no ready-made solution for this.
The only thing that comes to mind is to use a MutatingAdmissionWebhook. It would need to catch the moment when a new Kubernetes Service is created and then modify the ConfigMap for CoreDNS as described in the CoreDNS documentation.
After that, you would need to reload the CoreDNS configuration to apply the new ConfigMap contents. To achieve that, you can use the reload plugin for CoreDNS. More details about this plugin can be found here.
Instead of the above, you can consider using a sidecar container for CoreDNS which sends a SIGUSR1 signal to the CoreDNS container.
An example of this method can be found in this GitHub thread.
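Whichever trigger you choose, the ConfigMap edit itself can be automated with a Kubernetes client. A minimal sketch with the Python client, assuming the stock coredns ConfigMap in kube-system; the FQDN, target service, and insertion point are illustrative:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map("coredns", "kube-system")
corefile = cm.data["Corefile"]
rule = "    rewrite name my-service.example.com my-service.default.svc.cluster.local\n"
if rule.strip() not in corefile:
    # insert the rule just inside the main '.:53 {' server block
    corefile = corefile.replace(".:53 {\n", ".:53 {\n" + rule, 1)
    v1.patch_namespaced_config_map(
        "coredns", "kube-system", body={"data": {"Corefile": corefile}})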

Add Sidecar container to running pod(s)

I have Helm deployment scripts for a vendor application which we are operating. For a logging solution, I need to add a sidecar container for Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead, I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that a sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can update (patch) the resource definition; this will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
Answering the part of the post:
Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods of adding a sidecar to a Deployment. Both of these methods will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following below links:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/DaemonSet and add the sidecar. The downside of this method is that it's not automated like Helm, and you would need to create a "patch" for every resource (each Deployment/StatefulSet/DaemonSet etc.). In case of any updates from other sources like Helm, this "patch" would be overridden.
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
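As a sketch of such a patch via the Python client (the Deployment name, namespace, and Fluent Bit image are placeholders; a dict body is sent as a strategic merge patch, which merges the containers list by name and therefore appends the sidecar):

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# hypothetical sidecar definition
sidecar = {
    "name": "fluentbit-sidecar",
    "image": "fluent/fluent-bit:1.9",
}
patch = {"spec": {"template": {"spec": {"containers": [sidecar]}}}}
# this triggers a rolling update of the Deployment's Pods
apps.patch_namespaced_deployment(name="my-app", namespace="default", body=patch)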
Edit the Helm Chart and use $ helm upgrade
This method will require editing the Helm charts. Changes made this way, like adding a sidecar, will persist through updates. After making the changes you will need to run $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
A Pod's container list is immutable, as mentioned by dawid-kruk. Therefore, modifying the pod description will cause the containers to be recreated.
You can modify the pod using the kubectl patch command; don't forget to reapply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle deployed resources.
#1 Mutating admission controller
A mutating admission controller (webhook) can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested).
#2 Support arbitrary sidecars in Helm
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, as Prometheus does; see https://stackoverflow.com/a/62910122/1260896

What is the recommended alternative to kubectl '--generator' option?

One of the points in the kubectl best practices section in the Kubernetes docs states:
Pin to a specific generator version, such as kubectl run --generator=deployment/v1beta1
But then a little further down in the doc, we learn that except for Pod, the use of the --generator option is deprecated and will be removed in future versions.
Why is this being done? Doesn't a generator make life easier by creating a template file for the resource definition of a deployment, service, or other resource? What alternative is the Kubernetes team suggesting? This isn't there in the docs :(
kubectl create is the recommended alternative if you want to create more than just a Pod (e.g. a Deployment).
https://kubernetes.io/docs/reference/kubectl/conventions/#generators says:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.
This pull request has the reason why generators (except run-pod/v1) were deprecated:
The direction is that we want to move away from kubectl run because it's over bloated and complicated for both users and developers. We want to mimic docker run with kubectl run so that it only creates a pod, and if you're interested in other resources kubectl create is the intended replacement.
For a deployment you can try
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
and, as the conventions doc notes:
Note: kubectl run --generator except for run-pod/v1 is deprecated in v1.12.

How to access the Kubernetes API in Go and run kubectl commands

I want to access my Kubernetes cluster API in Go to run a kubectl command to get the available namespaces in my K8s cluster, which is running on Google Cloud.
My sole purpose is to get the namespaces available in my cluster by running a kubectl command; kindly let me know if there is any alternative.
You can start with kubernetes/client-go, the Go client for Kubernetes, made for talking to a Kubernetes cluster (not through kubectl, though: directly through the Kubernetes API).
It includes a NamespaceLister, which helps list Namespaces.
See "Building stuff with the Kubernetes API — Using Go" from Vladimir Vivien
Michael Hausenblas (Developer Advocate at Red Hat) points, in the comments, to documentation at using-client-go.cloudnative.sh:
A versioned collection of snippets showing how to use client-go.
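Since the question explicitly invites alternatives: if Go is not a hard requirement, the same task takes a few lines with the official Python client as well (a minimal sketch, assuming a kubeconfig in the default location):

from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, like kubectl does
v1 = client.CoreV1Api()
for ns in v1.list_namespace().items:
    print(ns.metadata.name)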