Patch existing resource using Helm

Is it possible to have a Helm template that gets applied like kubectl patch?
This would mean that the values provided are merged into the existing resource, rather than overriding the complete resource.
As an example, if there was a resource living in the cluster like:
foo:
  - bar
huu:
  - har
I'd like to update only part of it, e.g. patching foo: [bar] to foo: [pear] when applying the chart, without any knowledge of the rest of the resource.
Also, if this is an antipattern, I'd be very thankful for any hints on how to achieve it without manually running kubectl patch.
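To be concrete, the effect I'm after is what a merge patch does (resource kind and name are placeholders here):
$ kubectl patch <kind> <name> --type merge -p '{"foo": ["pear"]}'
A merge patch like this replaces the foo list as a whole while leaving huu untouched.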

Helm needs to know that this resource is managed by Helm, so you have to adopt the resource. In Helm 3 this is done by adding a label and annotations to it:
labels:
  app.kubernetes.io/managed-by: Helm
annotations:
  meta.helm.sh/release-name: your-release
  meta.helm.sh/release-namespace: your-namespace
There's also a Helm plugin to adopt existing resources: https://github.com/HamzaZo/helm-adopt
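As a sketch, adopting an existing ConfigMap by hand could look like this (resource and release names are placeholders):
$ kubectl label configmap my-config app.kubernetes.io/managed-by=Helm
$ kubectl annotate configmap my-config meta.helm.sh/release-name=your-release
$ kubectl annotate configmap my-config meta.helm.sh/release-namespace=default
Afterwards, a chart that renders a resource with the same kind, name and namespace can take the object over on the next helm upgrade (or install, with Helm 3.2+).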

Related

Helmfile with additional resource without chart

I know this is maybe a weird question, but I want to ask if it's possible to also manage single resources (e.g. a ConfigMap or Secret) without a separate chart?
For example, I install nginx-ingress and would like to additionally apply a Secret that contains HTTP basic-authentication data.
I can reference the nginx-ingress repo directly in my helmfile, but do I really need to create a separate Helm chart just to apply the basic-auth Secret?
I have many releases that need a single additional resource (like a JSON ConfigMap or a single Secret), and it would be cumbersome to need a separate chart for each release.
Thank you!
Sorry, Helmfile only manages entire Helm releases.
There are a couple of escape hatches you might be able to use. Helmfile hooks can run arbitrary shell commands on the host (as distinct from Helm hooks, which usually run Jobs in the cluster), so you could in principle kubectl apply a file in a hook, as sketched below. Helmfile also has some integration with Kustomize, and it might be possible to add resources that way. As you've noted, you can also write local charts and put whatever YAML you need in those.
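For instance, a minimal sketch of the hook approach, assuming a hypothetical basic-auth-secret.yaml next to the helmfile:
releases:
  - name: ingress
    chart: ingress-nginx/ingress-nginx
    hooks:
      - events: ["presync"]
        command: "kubectl"
        args: ["apply", "-f", "basic-auth-secret.yaml"]
The presync event fires before the release is synced, so the Secret exists by the time the chart's resources are applied.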
The occasional chart does support including arbitrary extra resources or specific configuration content; the Bitnami MariaDB chart, to pick one, supports putting anything you want under an extraDeploy value. You could use this in combination with Helmfile values: to inject more resources:
releases:
  - name: mariadb
    chart: bitnami/mariadb
    values:
      - extraDeploy:
          - |-
            apiVersion: v1
            kind: ConfigMap
            ...

Is there any mechanism in kubernetes to automatically add annotation to new pods in a specific namespace?

I have a namespace where new short-lived pods (< 1 minute) are created constantly by Apache Airflow. I want all those new pods to be annotated with aws.amazon.com/cloudwatch-agent-ignore: true automatically, so that no CloudWatch metrics (Container Insights) are created for them.
I know that I can achieve this from the Airflow side with a pod mutation hook, but for the sake of the argument let's say that I have no control over the configuration of that Airflow instance.
I have seen MutatingAdmissionWebhook and it seems it could do the trick, but it also seems like considerable effort to set up. So I'm looking for a more off-the-shelf solution: is there some "standard" admission controller that covers this specific use case, without me having to deploy a web server and implement the API required by MutatingAdmissionWebhook?
Is there any way to add that annotation from the Kubernetes side at pod creation time? The annotation must be there "from the beginning", not added 5 seconds later, otherwise the cwagent might pick up the pod between its creation and the annotation being added.
To clarify, I am posting a community wiki answer.
You need to use the aws.amazon.com/cloudwatch-agent-ignore: true annotation: a pod that has it will be ignored by amazon-cloudwatch-agent / cwagent.
Here is an excerpt of your solution for adding this annotation in Apache Airflow:
(...) In order to force Apache Airflow to add the
aws.amazon.com/cloudwatch-agent-ignore: true annotation to the task/worker pods and to the pods created by the KubernetesPodOperator you will need to add the following to your helm values.yaml (assuming that you are using the "official" helm chart for airflow 2.2.3):
airflowPodAnnotations:
  aws.amazon.com/cloudwatch-agent-ignore: "true"
airflowLocalSettings: |-
  def pod_mutation_hook(pod):
      pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
If you are not using the Helm chart, you will need to change the pod_template_file yourself to add the annotation, and you will also need to modify airflow_local_settings.py to include the pod_mutation_hook.
Here is the link to your whole answer.
You can try this repo, which is a mutating admission webhook that does this. To date there is no built-in Kubernetes support for automatically annotating pods in a specific namespace.
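If you do end up writing your own webhook, the mutation itself is tiny. A sketch of the JSONPatch such a webhook would return, assuming the pod already has an annotations map (note that the / in the annotation key is escaped as ~1 per JSON Pointer rules):
[
  {
    "op": "add",
    "path": "/metadata/annotations/aws.amazon.com~1cloudwatch-agent-ignore",
    "value": "true"
  }
]
Because admission runs before the object is persisted, the annotation is there "from the beginning", as you require.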

How to update statefulset spec with terraform helm provider

I have a StatefulSet created using the Terraform Helm provider. I need to update the value of an attribute (serviceName) in the StatefulSet, but I keep getting the following error:
Error: failed to replace object: StatefulSet.apps "value" is invalid: spec: Forbidden:
updates to statefulset spec for fields other than 'replicas', 'template', and
'updateStrategy' are forbidden
The error is pretty descriptive and I understand that the serviceName property can't be changed, but then how do I update it? I am totally fine with downtime, and with Helm deleting all the old pods and creating new ones.
I have tried setting the force_update and recreate_pods properties to true on my helm_release with no luck. Manually deleting the old Helm release is not an option for me.
I maintain the Kustomization provider and, unlike the Helm integration into Terraform, it tracks each individual Kubernetes resource in the Terraform state. Therefore, it will show changes to the actual Kubernetes resources in the plan. And, most important for your issue here, it will also generate destroy-and-recreate plans for cases where you have to change immutable fields.
It's a bit of a migration effort, but you can make it easier by using the helm template command to render the chart to YAML on disk and then pointing the Kustomization provider at that YAML.
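For example (release and chart names are placeholders):
$ helm template my-release ./my-chart > path/to/helm/template/output.yaml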
As part of Kubestack, the Terraform framework for AKS, EKS and GKE that I maintain, I also provide a convenience module.
You could use it like this, to have it apply the output of helm template:
module "example_stateful_set" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
"${path.root}/path/to/helm/template/output.yaml"
]
}
}
}
Finally, you will have to import the existing Kubernetes resources into the Terraform state, so that the provider can start managing them.

Add Sidecar container to running pod(s)

I have Helm deployment scripts for a vendor application that we operate. For the logging solution, I need to add a sidecar container running Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
To define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that a sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can update (patch) the resource definition, which will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
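For completeness: the ephemeral-container feature referenced in that issue is nowadays exposed via kubectl debug, but it is aimed at debugging, not at long-running sidecars (pod and container names below are placeholders):
$ kubectl debug -it my-pod --image=busybox:1.36 --target=my-app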
Answering the part of the post:
Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods to add a sidecar to a Deployment. Both of those methods will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following below links:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/DaemonSet and add the sidecar. The downside of this method is that it's not automated like Helm, and you would need to create a "patch" for every resource (each Deployment/StatefulSet/DaemonSet etc.). Any update from another source like Helm would override this "patch".
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
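As a minimal sketch (resource name and image are assumptions, not taken from your chart), the patch could look like this. kubectl patch defaults to a strategic merge patch for built-in types, which merges the containers list by name, so the sidecar is added next to the existing containers rather than replacing them:
# sidecar-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: fluentbit-sidecar
          image: fluent/fluent-bit:1.9
Applied with, for example:
$ kubectl patch deployment vendor-app --patch "$(cat sidecar-patch.yaml)"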
Edit the Helm Chart and use $ helm upgrade
This method requires editing the Helm charts, but changes made this way (like adding a sidecar) will persist through updates. After making the changes you will need to run $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
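Purely as an illustration (the value names and the conditional are assumptions, not part of any vendor chart), the edit could add a conditional entry under spec.template.spec.containers in the chart's Deployment template:
{{- if .Values.fluentbitSidecar.enabled }}
        - name: fluentbit-sidecar
          image: {{ .Values.fluentbitSidecar.image }}
{{- end }}
followed by:
$ helm upgrade RELEASE_NAME CHART --set fluentbitSidecar.enabled=true --set fluentbitSidecar.image=fluent/fluent-bit:1.9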
As mentioned by dawid-kruk, the containers list of a Kubernetes Pod is immutable. Therefore modifying the pod description will cause the containers to be recreated.
You can modify the pod's owning resource using the kubectl patch command; don't forget to reapply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle the deployed resources.
#1 mutating admission controller
A mutating admission controller (webhook) can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested).
#2 support arbitrary sidecar in helm
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, as in Prometheus; see https://stackoverflow.com/a/62910122/1260896

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for binding an existing resource to a previous release.