Trigger When A New Pod Is Created on Kubernetes

I have a question that I have encountered on my project. Briefly: when a user clicks a button, a pod is created, does some operations, and is finally deleted. I need to measure the pod's running time and deduct that duration from the user's credit. I want to manage this externally. Is it possible to detect and track, from outside the pods, when a new pod has been created and destroyed?
Thanks

I think there are multiple approaches to this problem, and I can suggest one.
If you are working with a single container, then postStart and preStop hooks can be useful. See the official documentation on container hooks.
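For illustration, here is a minimal sketch of what such a container spec could look like with client-go (assuming client-go v0.23+, where the handler type is named corev1.LifecycleHandler; the hook commands are just placeholders for whatever start/stop bookkeeping you need):

import corev1 "k8s.io/api/core/v1"

// containerWithHooks is a single-container sketch with lifecycle hooks.
// Replace the echo commands with your own start/stop bookkeeping.
var containerWithHooks = corev1.Container{
    Name:  "worker",
    Image: "busybox",
    Lifecycle: &corev1.Lifecycle{
        PostStart: &corev1.LifecycleHandler{
            Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "echo started"}},
        },
        PreStop: &corev1.LifecycleHandler{
            Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "echo stopping"}},
        },
    },
}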

The function I wrote looks like the following:
import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/tools/cache"
)

func WatchPods() {
    clientset, err := utils.NewClient(true) // create the Kubernetes client (author's own helper)
    if err != nil {
        // handle error
    }
    watchlist := cache.NewListWatchFromClient(
        clientset.CoreV1().RESTClient(),
        string(v1.ResourcePods),
        "default",
        fields.Everything(),
    )
    _, controller := cache.NewInformer(
        watchlist,
        &v1.Pod{},
        0,
        cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) { // called when a new pod is created
                pod := obj.(*v1.Pod)
                _ = pod // do whatever you want
            },
            DeleteFunc: func(obj interface{}) { // called when a pod is deleted
                pod := obj.(*v1.Pod)
                _ = pod // do whatever you want
            },
        },
    )
    stop := make(chan struct{})
    defer close(stop)
    controller.Run(stop)
}
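To turn those events into a billable duration, one rough approach (just a sketch, ignoring restarts and evictions, and assuming the "time" package is imported) is to read the timestamps the API server already records on the pod status, for example from inside DeleteFunc:

// podRunTime derives an approximate runtime for a finished pod from its status.
func podRunTime(pod *v1.Pod) time.Duration {
    if pod.Status.StartTime == nil {
        return 0 // never started
    }
    start := pod.Status.StartTime.Time
    for _, cs := range pod.Status.ContainerStatuses {
        if cs.State.Terminated != nil {
            // use the recorded finish time of the terminated container
            return cs.State.Terminated.FinishedAt.Time.Sub(start)
        }
    }
    // container not terminated yet (e.g. pod deleted while still running)
    return time.Since(start)
}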

Related

OPA/Rego: Ensure that every Service in Helm chart has exactly one matching Pod

I would like to check that every Service in a rendered Helm chart has exactly one matching Pod.
A Pod-to-Service association exists when every entry specified in a Service's spec.selector object is reflected in a Pod's metadata.labels object (which can have additional keys).
The following policy is tested using Conftest by running conftest test --combine {YAML_FILE} and checks that every Service has at least one matching Pod. I'm completely unsure how to transform this so that it checks for exactly one matching Pod.
package main

import future.keywords.every

in_set(e, s) { s[e] }

get_pod(resource) := pod {
    in_set(resource.kind, {"Deployment", "StatefulSet", "Job"})
    pod := resource.spec.template
}

# ensure that every service has at least one matching pod
# TODO: ensure that every service has exactly one matching pod
deny_service_without_matching_pod[msg] {
    service := input[_].contents
    service.kind == "Service"
    selector := object.get(service, ["spec", "selector"], {})
    pods := { p | p := get_pod(input[_].contents) }
    every pod in pods {
        labels := object.get(pod, ["metadata", "labels"], {})
        matches := { key | some key; labels[key] == selector[key] }
        count(matches) != count(selector)
    }
    msg := sprintf("service %s has no matching pod", [service.metadata.name])
}
Marginal note: the get_pod function doesn't retrieve all PodTemplates that can possibly occur in a Helm chart. Other checks are in place to keep the Kubernetes API surface of the Helm chart small, so in this case Pods can only occur in Deployments, StatefulSets and Jobs.
Maybe there are Rego experts here who can chime in and help. That would be very appreciated! 😀
Since there's no sample data provided, this is untested code. It should work though :)
package main

import future.keywords.in

pods := { pod |
    resource := input[_].contents
    resource.kind in {"Deployment", "StatefulSet", "Job"}
    pod := resource.spec.template
}

services := { service |
    service := input[_].contents
    service.kind == "Service"
}

pods_matching_selector(selector) := { pod |
    selector != {}
    some pod in pods
    labels := pod.metadata.labels
    # a pod matches only if *every* selector entry is present in its labels
    matching := { key | some key; labels[key] == selector[key] }
    count(matching) == count(selector)
}

deny_service_without_one_matching_pod[msg] {
    some service in services
    selector := object.get(service, ["spec", "selector"], {})
    matching_pods := count(pods_matching_selector(selector))
    matching_pods != 1
    msg := sprintf(
        "service %s has %d matching pods, must have exactly one",
        [service.metadata.name, matching_pods]
    )
}

How to trigger a rollout restart on deployment resource from controller-runtime

I have been using kubebuilder for writing a custom controller and am aware of the Get(), Update(), and Delete() methods that it provides. But now I am looking for a method that mimics the behaviour of kubectl rollout restart deployment. If there is no such direct method, then I am looking for the correct way to mimic it.
type CustomReconciler struct {
    client.Client
    Log    logr.Logger
    Scheme *runtime.Scheme
}

func (r *CustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    configMap := &v1.ConfigMap{}
    err := r.Get(ctx, req.NamespacedName, configMap)
    if err != nil {
        logger.Error(err, "Failed to GET configMap")
        return ctrl.Result{}, err
    }
Say in the above code I read a deployment name from the ConfigMap and want to rollout-restart it as follows:
    val := configMap.Data["config.yml"]
    config := Config{}
    if err := yaml.Unmarshal([]byte(val), &config); err != nil {
        logger.Error(err, "failed to unmarshal config data")
        return ctrl.Result{}, err
    }
    // Need equivalent of the following
    // r.RolloutRestart(config.DeploymentName)
In all cases where you wish to replicate kubectl behavior, the answer is always to increase its verbosity (e.g. kubectl rollout restart deployment/foo --v=8) and it will show you exactly, sometimes down to the wire payloads, what it is doing.
For rollout restart, one will find that it just bumps an annotation on the Deployment/StatefulSet/whatever, which makes the outer object "different" and triggers a reconciliation run.
You can squat on their annotation, make up your own, or use a label change; practically any "meaningless" change will do.
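For example, here is an untested controller-runtime sketch of that annotation bump; the helper name is made up, but kubectl.kubernetes.io/restartedAt is the annotation kubectl itself writes:

import (
    "context"
    "time"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/types"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// rolloutRestartDeployment bumps the pod-template annotation that
// kubectl rollout restart uses, which triggers a new rollout.
func rolloutRestartDeployment(ctx context.Context, c client.Client, namespace, name string) error {
    var deploy appsv1.Deployment
    if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, &deploy); err != nil {
        return err
    }
    patch := client.MergeFrom(deploy.DeepCopy())
    if deploy.Spec.Template.Annotations == nil {
        deploy.Spec.Template.Annotations = map[string]string{}
    }
    deploy.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
    return c.Patch(ctx, &deploy, patch)
}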

Kubernetes Operator to read watch namespace from config map

I have an operator built via kubebuilder that reads a "WATCH_NAMESPACE" env var to understand which namespaces to watch. This is how the current setup works:
namespaces := os.Getenv("WATCH_NAMESPACE")
if strings.Contains(namespaces, ",") {
    setupLog.Info("Operator will listen to the specific namespaces: " + namespaces)
    options.NewCache = cache.MultiNamespacedCacheBuilder(strings.Split(namespaces, ","))
} else {
    log.Info("Operator will listen only one namespace or all namespace: " + namespaces)
    options.Namespace = namespaces
}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)
if err != nil {
    log.Error(err, "unable to start manager")
    os.Exit(1)
}

if err = (&dataplatformcontroller.DruidReconciler{
    ...
}
This works fine.
But the problem is that every time we need to add a new namespace to watch, we need to restart the operator. I believe the best option here is to watch a ConfigMap and read from it every time.
But I am not sure how to proceed with this. Any suggestions, documentation, or links would be helpful.
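For what it's worth, here is an untested sketch of reading the same value from a ConfigMap at startup instead of the env var (the "operator-system" namespace, "operator-config" name and "watchNamespaces" key are placeholders). Note that the manager's cache is fixed once ctrl.NewManager is called, so a change to the list would still require restarting the operator:

cfg := ctrl.GetConfigOrDie()
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
    setupLog.Error(err, "unable to create clientset")
    os.Exit(1)
}
// read the namespace list from a ConfigMap instead of WATCH_NAMESPACE
cm, err := clientset.CoreV1().ConfigMaps("operator-system").Get(context.TODO(), "operator-config", metav1.GetOptions{})
if err != nil {
    setupLog.Error(err, "unable to read watch-namespace config map")
    os.Exit(1)
}
namespaces := cm.Data["watchNamespaces"]
if strings.Contains(namespaces, ",") {
    options.NewCache = cache.MultiNamespacedCacheBuilder(strings.Split(namespaces, ","))
} else {
    options.Namespace = namespaces
}
mgr, err := ctrl.NewManager(cfg, options)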

Why does client.Update(...) ignore non-primitive values?

I'm trying to modify the Spec of non-owned objects as part of the Reconcile of my Custom Resource, but it seems like it ignores any fields that are not primitives. I am using controller-runtime.
I figured since it was only working on primitives, maybe it's an issue related to DeepCopy. However, removing it did not solve the issue, and I read that any Updates on objects have to be on deep copies to avoid messing up the cache.
I also tried setting client.FieldOwner(...) since it says that that's required for Updates that are done server-side. I wasn't sure what to set it to, so I made it req.NamespacedName.String(). That did not work either.
Here is the Reconcile loop for my controller:
func (r *MyCustomObjectReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    // ...
    var myCustomObject customv1.MyCustomObject
    if err := r.Get(ctx, req.NamespacedName, &myCustomObject); err != nil {
        log.Error(err, "unable to fetch ReleaseDefinition")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // ...
    deployList := &kappsv1.DeploymentList{}
    labels := map[string]string{
        "mylabel": myCustomObject.Name,
    }
    if err := r.List(ctx, deployList, client.MatchingLabels(labels)); err != nil {
        log.Error(err, "unable to fetch Deployments")
        return ctrl.Result{}, err
    }

    // make a deep copy to avoid messing up the cache (used by other controllers)
    myCustomObjectSpec := myCustomObject.Spec.DeepCopy()

    // the two fields of my CRD that affect the Deployments
    port := myCustomObjectSpec.Port           // type: *int32
    customenv := myCustomObjectSpec.CustomEnv // type: map[string]string

    for _, dep := range deployList.Items {
        newDeploy := dep.DeepCopy() // already returns a pointer

        // Do these things:
        // 1. replace first container's containerPort with myCustomObjectSpec.Port
        // 2. replace first container's Env with values from myCustomObjectSpec.CustomEnv
        // 3. Update the Deployment
        container := newDeploy.Spec.Template.Spec.Containers[0]

        // 1. Replace container's port
        container.Ports[0].ContainerPort = *port

        envVars := make([]kcorev1.EnvVar, 0, len(customenv))
        for key, val := range customenv {
            envVars = append(envVars, kcorev1.EnvVar{
                Name:  key,
                Value: val,
            })
        }
        // 2. Replace container's Env variables
        container.Env = envVars

        // 3. Perform update for deployment (port works, env gets ignored)
        if err := r.Update(ctx, newDeploy); err != nil {
            log.Error(err, "unable to update deployment", "deployment", dep.Name)
            return ctrl.Result{}, err
        }
    }
    return ctrl.Result{}, nil
}
The Spec for my CRD looks like:
// MyCustomObjectSpec defines the desired state of MyCustomObject
type MyCustomObjectSpec struct {
    // CustomEnv is a list of environment variables to set in the containers.
    // +optional
    CustomEnv map[string]string `json:"customEnv,omitempty"`

    // Port is the port that the backend container is listening on.
    // +optional
    Port *int32 `json:"port,omitempty"`
}
I expected that when I kubectl apply a new CR with changes to the Port and CustomEnv fields, it would modify the deployments as described in Reconcile. However, only the Port is updated, and the changes to the container's Env are ignored.
The problem was that I needed a pointer to the Container I was modifying. Assigning Containers[0] to a plain variable copies the struct, so container.Env = envVars only changed that copy; the port change still went through because container.Ports is a slice whose copied header points at the same backing array as the one in newDeploy.
Doing this instead worked:
container := &newDeploy.Spec.Template.Spec.Containers[0]
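A standalone illustration of that behaviour (with a hypothetical Container type, not the real corev1.Container): element writes through a copied slice header leak back to the original, while field reassignments on the copy do not:

package main

import "fmt"

type Container struct {
    Ports []int
    Env   []string
}

func main() {
    containers := []Container{{Ports: []int{80}}}

    c := containers[0]      // copies the struct (and its slice headers)
    c.Ports[0] = 8080       // writes into the shared backing array: visible in containers
    c.Env = []string{"A=1"} // reassigns only the copy's field: NOT visible in containers
    fmt.Println(containers[0].Ports, containers[0].Env) // [8080] []

    p := &containers[0]     // pointer into the original slice element
    p.Env = []string{"A=1"} // now the change sticks
    fmt.Println(containers[0].Env) // [A=1]
}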

Monitoring a kubernetes job

I have Kubernetes jobs that take a variable amount of time to complete, between 4 and 8 minutes. Is there any way I can know when a job has completed, rather than waiting 8 minutes and assuming the worst case? I have a test case that does the following:
1) Submits the Kubernetes job.
2) Waits for its completion.
3) Checks whether the job has had the expected effect.
The problem is that in my Java test that submits the deployment job to Kubernetes, I am waiting for 8 minutes even if the job has taken less than that to complete, as I don't have a way to monitor the status of the job from the Java test.
$ kubectl wait --for=condition=complete --timeout=600s job/myjob
The
<kube master>/apis/batch/v1/namespaces/default/jobs
endpoint lists the status of the jobs. I parsed this JSON and retrieved the name of the latest running job that starts with "deploy...".
Then we can hit
<kube master>/apis/batch/v1/namespaces/default/jobs/<job name retrieved above>
and monitor the status field, which looks like the following when the job succeeds:
"status": {
"conditions": [
{
"type": "Complete",
"status": "True",
"lastProbeTime": "2016-09-22T13:59:03Z",
"lastTransitionTime": "2016-09-22T13:59:03Z"
}
],
"startTime": "2016-09-22T13:56:42Z",
"completionTime": "2016-09-22T13:59:03Z",
"succeeded": 1
}
So we keep polling this endpoint till it completes. Hope this helps someone.
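For completeness, here is an untested Go sketch of the same polling loop using client-go instead of raw HTTP (assuming a recent client-go; clientset construction is omitted and the five-second interval is arbitrary):

import (
    "context"
    "fmt"
    "time"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// waitForJob polls the job until its Complete or Failed condition becomes true.
func waitForJob(ctx context.Context, clientset kubernetes.Interface, namespace, name string) error {
    for {
        job, err := clientset.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, cond := range job.Status.Conditions {
            if cond.Status != corev1.ConditionTrue {
                continue
            }
            if cond.Type == batchv1.JobComplete {
                return nil
            }
            if cond.Type == batchv1.JobFailed {
                return fmt.Errorf("job %s failed: %s", name, cond.Message)
            }
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(5 * time.Second):
        }
    }
}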
You can use the NewSharedInformer method to watch the jobs' statuses. I'm not sure how to write it in Java; here's a Go example that gets your job list periodically:
type ClientImpl struct {
    clients *kubernetes.Clientset
}

type JobListFunc func() ([]batchv1.Job, error)

var (
    jobsSelector = labels.SelectorFromSet(labels.Set(map[string]string{"job_label": "my_label"})).String()
)

func (c *ClientImpl) NewJobSharedInformer(resyncPeriod time.Duration) JobListFunc {
    var once sync.Once
    var jobListFunc JobListFunc
    once.Do(
        func() {
            restClient := c.clients.BatchV1().RESTClient()
            optionsModifier := func(options *metav1.ListOptions) {
                options.LabelSelector = jobsSelector
            }
            watchList := cache.NewFilteredListWatchFromClient(restClient, "jobs", metav1.NamespaceAll, optionsModifier)
            informer := cache.NewSharedInformer(watchList, &batchv1.Job{}, resyncPeriod)
            go informer.Run(context.Background().Done())

            jobListFunc = JobListFunc(func() (jobs []batchv1.Job, err error) {
                for _, c := range informer.GetStore().List() {
                    jobs = append(jobs, *(c.(*batchv1.Job)))
                }
                return jobs, nil
            })
        })
    return jobListFunc
}
Then in your monitor you can check the status by ranging over the job list:
func syncJobStatus() {
    jobs, err := jobListFunc()
    if err != nil {
        log.Errorf("Failed to list jobs: %v", err)
        return
    }
    // TODO: other code
    for _, job := range jobs {
        name := job.Name
        // check status...
    }
}
I found that the JobStatus does not get updated while polling using job.getStatus(), even if the status changes when checked from the command prompt using kubectl.
To get around this, I reload the job handler:
client.extensions().jobs()
        .inNamespace(myJob.getMetadata().getNamespace())
        .withName(myJob.getMetadata().getName())
        .get();
My loop to check the job status looks like this:
KubernetesClient client = new DefaultKubernetesClient(config);
Job myJob = client.extensions().jobs()
        .load(new FileInputStream("/path/x.yaml"))
        .create();
boolean jobActive = true;
while (jobActive) {
    myJob = client.extensions().jobs()
            .inNamespace(myJob.getMetadata().getNamespace())
            .withName(myJob.getMetadata().getName())
            .get();
    JobStatus myJobStatus = myJob.getStatus();
    System.out.println("==================");
    System.out.println(myJobStatus.toString());
    if (myJob.getStatus().getActive() == null) {
        jobActive = false;
    } else {
        System.out.println(myJob.getStatus().getActive());
        System.out.println("Sleeping for a minute before polling again!!");
        Thread.sleep(60000);
    }
}
System.out.println(myJob.getStatus().toString());
Hope this helps
You did not mention what is actually checking the job completion, but instead of waiting blindly and hoping for the best, you should keep polling the job status inside a loop until it becomes "Completed".
Since you said Java, you can use the Kubernetes Java bindings from fabric8 to start the job and add a watcher:
KubernetesClient k = ...
k.extensions().jobs().load(yaml).watch(new Watcher<Job>() {
    @Override
    public void onClose(KubernetesClientException e) {}

    @Override
    public void eventReceived(Action a, Job j) {
        if (j.getStatus().getSucceeded() > 0)
            System.out.println("At least one job attempt succeeded");
        if (j.getStatus().getFailed() > 0)
            System.out.println("At least one job attempt failed");
    }
});
I don't know what kind of tasks you are talking about, but let's assume you are running some pods.
You can do
watch 'kubectl get pods | grep <name of the pod>'
or
kubectl get pods -w
It will not be the full name, of course, since most of the time pods get random names. If you are running an nginx ReplicationController or Deployment, your pods will end up with something like nginx-1696122428-ftjvy, so you will want to do
watch 'kubectl get pods | grep nginx'
You can replace the pods with whatever resource you are working with, i.e. (rc, svc, deployments, ...).