Labelling AWX instance created with AWX-operator - ansible-awx

I am wondering if there is any way to label the AWX instance created via the AWX-operator. Looking at the AWX-operator code, it seems to apply only its own labels and provide no way to add custom ones.
There's an option for labeling the AWX service in Kubernetes via service_labels, but nothing for applying custom labels to the deployment and pods.
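For reference, the service-level labeling that does exist looks roughly like this. This is a minimal sketch, assuming the spec field is named service_labels (as in recent awx-operator versions); the instance name and label values are placeholders:

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo                # placeholder instance name
spec:
  service_type: ClusterIP
  # Applied to the Service only; at the time of writing there is no
  # equivalent field for the deployment or pods.
  service_labels: |
    environment: production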

Related

Is there any mechanism in kubernetes to automatically add annotation to new pods in a specific namespace?

I have a namespace where new short-lived pods (< 1 minute) are constantly created by Apache Airflow. I want all of those new pods to be annotated with aws.amazon.com/cloudwatch-agent-ignore: true automatically, so that no CloudWatch metrics (Container Insights) are created for them.
I know that I can achieve this from the Airflow side with a pod mutation hook, but for the sake of argument let's say that I have no control over the configuration of that Airflow instance.
I have seen MutatingAdmissionWebhook, which seems like it could do the trick, but it looks like considerable effort to set up. So I'm looking for a more off-the-shelf solution: I want to know if there is some "standard" admission controller that handles this specific use case, without me having to deploy a web server and implement the API required by MutatingAdmissionWebhook.
Is there any way to add that annotation from the Kubernetes side at pod creation time? The annotation must be there "from the beginning", not added 5 seconds later; otherwise the cwagent might pick the pod up between its creation and the annotation being added.
To clarify, I am posting a community wiki answer.
You had to use the aws.amazon.com/cloudwatch-agent-ignore: true annotation: a pod that carries it will be ignored by amazon-cloudwatch-agent / cwagent.
Here is the excerpt of your solution showing how to add this annotation in Apache Airflow:
(...) In order to force Apache Airflow to add the aws.amazon.com/cloudwatch-agent-ignore: true annotation to the task/worker pods and to the pods created by the KubernetesPodOperator, you will need to add the following to your helm values.yaml (assuming that you are using the "official" helm chart for airflow 2.2.3):
airflowPodAnnotations:
  aws.amazon.com/cloudwatch-agent-ignore: "true"

airflowLocalSettings: |-
  def pod_mutation_hook(pod):
      pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
If you are not using the helm chart, you will need to add the annotation to the pod_template_file yourself, and you will also need to modify airflow_local_settings.py to include the pod_mutation_hook.
Here is the link to your whole answer.
You can try this repo, which is a mutating admission webhook that does exactly this. To date there is no built-in Kubernetes support for automatically annotating new pods in a specific namespace.
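If you do end up deploying (or writing) such a webhook, the cluster-side registration is the smaller part of the work. Below is a minimal sketch of the MutatingWebhookConfiguration, scoped to a single namespace; the webhook name, service, namespace, and path are hypothetical placeholders, and the TLS caBundle wiring is omitted:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-annotator
webhooks:
  - name: pod-annotator.example.com          # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                    # don't block pod creation if the webhook is down
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: airflow # only mutate pods in this namespace
    clientConfig:
      service:
        name: pod-annotator                  # hypothetical service backing the webhook
        namespace: webhook-system
        path: /mutate

Because the mutation happens during admission, the annotation is present before the pod object ever exists in the API server, which satisfies the "from the beginning" requirement.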

Scrape Kubernetes metadata labels using Prometheus

I have added some labels to Kubernetes namespace metadata, and now I want to scrape the namespaces as well as those labels using Prometheus. I am trying to create a Grafana dashboard and want to categorise namespaces based on labels. I tried using kubernetes_sd_configs: I am able to get the namespaces, but not their labels. Does anyone know of a way to scrape the labels along with the namespaces?
In case somebody else is also looking for an answer: you can use kube-state-metrics.
It exposes the kube_namespace_labels metric, which carries the namespace's labels as well as its name.
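Once the labels are exported (note that in kube-state-metrics v2 and later, label values must be allow-listed via the --metric-labels-allowlist flag, e.g. --metric-labels-allowlist=namespaces=[team]), you can join them onto other metrics in PromQL. A minimal sketch, assuming a hypothetical namespace label team exposed as label_team:

sum(kube_pod_info) by (namespace)
  * on (namespace) group_left (label_team)
    kube_namespace_labels

Since kube_namespace_labels always has the value 1, the multiplication leaves the left-hand values untouched and simply copies label_team onto each series, which you can then use to categorise panels in Grafana.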

EKS Node Group Terraform - Add label to specific node

I'm provisioning EKS with managed nodes through Terraform. No issues there, it's all working fine.
My problem is that I want to add a label to one of my nodes to use as a nodeSelector in one of my deployments. I have an app that is backed by an EBS persistent volume which obviously is only available in a single AZ, so I want my pod to schedule there.
I can add a label pretty easily with:
kubectl label nodes <my node> <key>=<value>
And this actually works fine, until you do something like update the node group to the next version. The labels don't persist, which makes sense, as they are not managed by Amazon.
Is there a way, either through Terraform or something else, to set these labels and make them persist? I notice that the EKS provider for Terraform has a labels option, but it seems that it will add the label to all nodes in the node group, which is not what I want. I've looked around but can't find anything.
You may not need to add a label to a specific node to solve your problem. Amazon as a cloud provider adds some Kubernetes labels to each node in a managed node group. Example:
labels:
  failure-domain.beta.kubernetes.io/region: us-east-1
  failure-domain.beta.kubernetes.io/zone: us-east-1a
  kubernetes.io/hostname: ip-10-10-10-10.ec2.internal...
  kubernetes.io/os: linux
  topology.ebs.csi.aws.com/zone: us-east-1a
  topology.kubernetes.io/region: us-east-1
  topology.kubernetes.io/zone: us-east-1a
The exact labels available to you will depend on the version of Kubernetes you are running. Try running kubectl get nodes -o json | jq '.items[].metadata.labels' to see the labels set on each node in your cluster.
I recommend using topology.kubernetes.io/zone to match the availability zone containing your EBS volume. According to the Kubernetes documentation, both nodes and persistent volumes should have this label populated by the cloud provider.
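For example, pinning the pod to the volume's availability zone would look something like this (a minimal sketch; the pod name, image, and zone are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-ebs-app                            # placeholder name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a   # zone containing the EBS volume
  containers:
    - name: app
      image: nginx:1.25                       # placeholder image

A nodeAffinity rule would work just as well if you need more flexible matching than an exact zone.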
Hope this helps. Let me know if you still have questions.
You can easily achieve that with Terraform:
resource "aws_eks_node_group" "example" {
...
labels = {
label_key = "label_value"
}
}
Add a second node group (with the desired node info) and label that node group.
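A sketch of that second-node-group approach, with hypothetical resource names, a single-AZ subnet so the nodes land in the volume's zone, and a placeholder label:

resource "aws_eks_node_group" "ebs_pinned" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "ebs-pinned"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = [aws_subnet.us_east_1a.id]  # single-AZ subnet matching the EBS volume

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  labels = {
    "workload" = "ebs-pinned"                   # selected via nodeSelector in the deployment
  }
}

Because the label is part of the node group configuration, EKS reapplies it whenever nodes are replaced, so it survives node group version updates.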

Having an issue while creating a custom dashboard in Grafana (data source is Prometheus)

I have set up Prometheus and Grafana to monitor my Kubernetes cluster and everything works fine. I then created a custom dashboard in Grafana for my application. The metric available in Prometheus is as follows, and I have added the same query in Grafana:
sum(irate(container_cpu_usage_seconds_total{namespace="test", pod_name="my-app-65c7d6576b-5pgjq", container_name!="POD"}[1m])) by (container_name)
The issue is that my application runs as a pod in Kubernetes, so when the pod is deleted or recreated its name changes and no longer matches the pod name hard-coded in the query above ("my-app-65c7d6576b-5pgjq"). The query then stops returning data, and I have to add new metrics in Grafana again. Please let me know how I can overcome this situation.
The answer was provided by manu thankachan:
I have done it. Made some change in the query as follow:
sum(irate(container_cpu_usage_seconds_total{namespace="test", container_name="my-app", container_name!="POD"}[1m])) by (container_name)
Only if the pod is created directly (not as part of a Deployment) does the pod name stay the same as what we specified.
If the pod is part of a Deployment, its name contains a unique string from the ReplicaSet and also ends in five random characters to keep the name unique.
So always try to use the container_name label, or, if your Kubernetes version is > v1.16.0, the container label.
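For instance, a query that keeps working across pod restarts without naming the pod exactly might look like this (a sketch using the post-1.16 label names; my-app is a placeholder prefix):

sum(irate(container_cpu_usage_seconds_total{namespace="test", pod=~"my-app-.*", container!="POD"}[1m])) by (pod)

The regex matcher pod=~"my-app-.*" tolerates the ReplicaSet hash and random suffix that Deployments append to pod names.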

Spinnaker server group labelling

I am creating a server group and I want to add a label to the deployment. I can't find any option in the Spinnaker UI to add one. Any help on this?
The current version of the Kubernetes cloud provider (v1) does not support configuring labels on Server Groups.
The new Kubernetes Provider (v2), which is manifest-based, allows you to configure labels. This version, however, is still in alpha.
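Since the v2 provider is manifest-based, the labels simply go into the manifest you hand to Spinnaker. A minimal sketch, with a placeholder app name, custom label, and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
  labels:
    app: my-app
    environment: staging        # custom label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        environment: staging    # propagated to the pods
    spec:
      containers:
        - name: my-app
          image: nginx:1.25     # placeholder image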
Sources
https://github.com/spinnaker/spinnaker/issues/1624
https://www.spinnaker.io/reference/providers/kubernetes-v2/