Do initializers (initializerConfiguration) work on k8s 1.10?

I tried (unsuccessfully) to set up an initializer admission controller on k8s 1.10, running in minikube. kubectl does not show 'initializerconfiguration' as a valid object type and attempting 'kubectl create -f init.yaml' with a file containing an initializerConfiguration object (similar to the example found here: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly) returns this:
no matches for kind "InitializerConfiguration" in version "admissionregistration.k8s.io/v1alpha1"
(I tried with /v1beta1 as well, because kubectl api-versions doesn't show admissionregistration.k8s.io/v1alpha1 but does have .../v1beta1; no luck with that, either).
"Initializers" is enabled in the --admission-control option for kube-apiserver and all possible APIs are also turned on by default in minikube - so it should have worked, according to the k8s documentation.

According to the document mentioned in the question:
Enable initializers alpha feature
Initializers is an alpha feature, so it is disabled by default. To turn it on, you need to:
Include “Initializers” in the --enable-admission-plugins flag when starting kube-apiserver. If you have multiple kube-apiserver replicas, all should have the same flag setting.
Enable the dynamic admission controller registration API by adding admissionregistration.k8s.io/v1alpha1 to the --runtime-config flag passed to kube-apiserver, e.g. --runtime-config=admissionregistration.k8s.io/v1alpha1. Again, all replicas should have the same flag setting.
NOTE: For those looking to use this on minikube, use this to pass runtime-config to the apiserver:
minikube start --vm-driver=none --extra-config=apiserver.runtime-config=admissionregistration.k8s.io/v1alpha1=true
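With the API group enabled, a manifest similar to the example in the linked documentation should be accepted. A minimal sketch (the initializer name and rules are just the example values from the docs):
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: InitializerConfiguration
metadata:
  name: example-config
initializers:
  # the initializer name must be fully qualified, i.e. contain at least two dots
  - name: podimage.example.com
    rules:
      # resources this initializer gets prepended to
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]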

Related

How can I tell if server-side apply is enabled in my Kubernetes cluster?

The page on server-side apply in the Kubernetes docs suggests that it can be enabled or disabled (e.g., the docs say, "If you have Server Side Apply enabled ...").
I have a GKE cluster and I would like to check if server-side apply is enabled. How can I do this?
You can try creating any object (a namespace, for example) and then inspect its YAML output with the commands below; that will tell you whether SSA is enabled.
Command:
kubectl create ns test-ssa
Get the created namespace:
kubectl get ns test-ssa -o yaml
If managedFields is present in the output, SSA is working.
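For reference, the relevant part of the output looks roughly like this when SSA field tracking is active (the exact manager names, timestamps and fields will differ):
apiVersion: v1
kind: Namespace
metadata:
  name: test-ssa
  managedFields:            # present when server-side apply tracking is active
    - manager: kubectl      # exact manager name varies with the client used
      operation: Update
      apiVersion: v1
      fieldsType: FieldsV1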
Server-side apply was introduced, I think, around Kubernetes version 1.14 and is now GA as of Kubernetes version 1.22. With GKE I have noticed it has already been part of the cluster in alpha or beta.
If you are using Helm on your GKE cluster you might have already noticed Server Side Apply.

Add Sidecar container to running pod(s)

I have helm deployment scripts for a vendor application which we are operating. For the logging solution, I need to add a sidecar container for Fluent Bit to push the logs to an aggregated log server (Splunk in this case).
Now, to define this sidecar container, I want to avoid changing the vendor-defined deployment scripts. Instead I want some alternative way to attach the sidecar container to the running pod(s).
So far I have understood that sidecar container can be defined inside the same deployment script (deployment configuration).
Answering the question in the comments:
Thanks @david. This has to be done before the deployment. I was wondering if I could attach a sidecar container to an already deployed (running) pod.
You can't attach an additional container to a running Pod. You can update (patch) the resource definition. This will force the resource to be recreated with the new specification.
There is a github issue about this feature which was closed with the following comment:
After discussing the goals of SIG Node, the clear consensus is that the containers list in the pod spec should remain immutable. #27140 will be better addressed by kubernetes/community#649, which allows running an ephemeral debugging container in an existing pod. This will not be implemented.
-- Github.com: Kubernetes: Issues: Allow containers to be added to a running pod
Answering the part of the post:
Now to define this sidecar container, I want to avoid changing vendor defined deployment scripts. Instead i want some alternative way to attach the sidecar container to the running pod(s).
Below I've included two methods to add a sidecar to a Deployment. Both of these methods will reload the Pods to match the new specification:
Use $ kubectl patch
Edit the Helm Chart and use $ helm upgrade
In both cases, I encourage you to check how Kubernetes handles updates of its resources. You can read more by following below links:
Kubernetes.io: Docs: Tutorials: Kubernetes Basics: Update: Update
Medium.com: Platformer blog: Enable rolling updates in Kubernetes with zero downtime
Use $ kubectl patch
The way to completely avoid editing the Helm charts would be to use:
$ kubectl patch
This method will "patch" the existing Deployment/StatefulSet/Daemonset and add the sidecar. The downside of this method is that it's not automated like Helm and you would need to create a "patch" for every resource (each Deployment/Statefulset/Daemonset etc.). In case of any updates from other sources like Helm, this "patch" would be overridden.
Documentation about updating API objects in place:
Kubernetes.io: Docs: Tasks: Manage Kubernetes objects: Update api object kubectl patch
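As an illustration, a JSON patch that appends a Fluent Bit sidecar to a Deployment could look like the sketch below; the Deployment name, container name and image are placeholders for your setup:
# Append a container to the Pod template of the "vendor-app" Deployment;
# the Deployment will then roll out new Pods containing the extra container.
kubectl patch deployment vendor-app --type=json -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/-",
    "value": {
      "name": "fluent-bit",
      "image": "fluent/fluent-bit:1.9"
    }
  }
]'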
Edit the Helm Chart and use $ helm upgrade
This method will require editing the Helm charts. The changes made, like adding a sidecar, will persist through updates. After making the changes you will need to run $ helm upgrade RELEASE_NAME CHART.
You can read more about it here:
Helm.sh: Docs: Helm: Helm upgrade
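For this method the change in the chart's Deployment template boils down to adding one more entry to the containers list. A sketch, where the container names, image and log volume are assumptions about your application:
# templates/deployment.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: vendor-app              # the existing application container
          image: vendor/app:1.0
        - name: fluent-bit              # the added logging sidecar
          image: fluent/fluent-bit:1.9
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
      volumes:
        - name: app-logs
          emptyDir: {}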
A Kubernetes Pod's container list is immutable, as mentioned by dawid-kruk. Therefore modifying the Pod description will cause the containers to be recreated.
You can modify the workload using the kubectl patch command; don't forget to re-apply the patch as necessary.
Alternatively, the two following options will inject the sidecar without having to modify/fork the upstream chart or mangle the deployed resources.
#1 mutating admission controller
A mutating admission controller (webhook) can modify resources; see https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
You can use a generic framework like OPA.
Or a specific webhook like fluentd-sidecar-injector (not tested).
#2 support arbitrary sidecar in helm
You could submit a feature request to the chart maintainer to support arbitrary sidecar injection, like in Prometheus; see https://stackoverflow.com/a/62910122/1260896 and the sketch below.
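If such a hook is added, using it would typically be just a values override, for example (the extraContainers key is hypothetical and depends on how the maintainer implements it):
# values.yaml (hypothetical)
extraContainers:
  - name: fluent-bit
    image: fluent/fluent-bit:1.9
    volumeMounts:
      - name: app-logs
        mountPath: /var/log/app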

Changing restricted fields of templates in kubernetes and openshift

I'm using the openshift playground. I deploy a sample application, and export the yaml for the pod.
While trying to edit some of the fields I ran across this message
Forbidden: unsafe sysctl "kernel.msgmax" is not allowed
Searching around, the link https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#listing-all-sysctl-parameters describes how some parameters are labelled unsafe and cannot be changed, while the safe ones can be.
But even the safe sysctls throw error,
spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations
Is it the playground environment that is limiting changes to kernel parameters? Would I need to have my own minikube installation to enable changing the unsafe sysctl parameters?
Apart from the minikube/kubelet alternatives given to edit/enable unsafe sysctls, is there a different way? What would be a good way to customize kernel parameters for a pod?
The safe sysctls throwing that error is expected behavior. What you need to do is delete the Pod before applying the edited YAML to the cluster. You can also avoid this error if you use a Deployment instead of a Pod directly.
Please read this documentation section thoroughly. Everything is clearly explained there.
As to setting Unsafe Sysctls, you need to additionally enable them on node-level:
All safe sysctls are enabled by default.
All unsafe sysctls are disabled by default and must be allowed manually by the cluster admin on a per-node basis. Pods with disabled unsafe sysctls will be scheduled, but will fail to launch.
With the warning above in mind, the cluster admin can allow certain unsafe sysctls for very special situations such as high-performance or real-time application tuning. Unsafe sysctls are enabled on a node-by-node basis with a flag of the kubelet; for example:
kubelet --allowed-unsafe-sysctls \
'kernel.msg*,net.core.somaxconn' ...
For Minikube, this can be done via the extra-config flag:
minikube start --extra-config="kubelet.allowed-unsafe-sysctls=kernel.msg*,net.core.somaxconn"...
Only namespaced sysctls can be enabled this way.
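Once the kubelet allows them, the sysctls are requested in the Pod's securityContext. A sketch, assuming the two sysctls from the kubelet example above:
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
      - name: kernel.msgmax        # unsafe: only works on nodes started with --allowed-unsafe-sysctls
        value: "65536"
      - name: net.core.somaxconn   # also unsafe, covered by the same kubelet flag
        value: "1024"
  containers:
    - name: app
      image: busybox:1.28
      command: ["sleep", "3600"]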
As to...
But even the safe sysctls throw error,
spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations
This is a completely different error message and it has nothing to do with restrictions on changing sysctls in your Pod definition. Note that you cannot change the majority of your Pod specification via kubectl edit, apart from the few exceptions listed in the message above. Specifically, you cannot change those fields without recreating your Pod, so in this case instead of editing it you can simply run:
kubectl get pod pod-name -o yaml > my-pod.yaml
Then you can edit the required Pod spec fields, delete the existing Pod (since most spec fields cannot be changed in place), and redeploy it:
kubectl apply -f my-pod.yaml
Alternatively you may edit your Deployment as @Arghya Sadhu already suggested in his answer. The Deployment controller will recreate those Pods for you with the updated specification.
Is it the playground environment that is limiting changes to kernel parameters? Would I need to have my own minikube installation to enable changing the unsafe sysctl parameters?
Not really. You can enable them on every node which is part of your cluster by re-configuring your kubelets. As to changing kubelet configuration, it might be done differently depending on your kubernetes installation. In case it was created with kubeadm you just need to edit the following file:
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
then run:
sudo systemctl daemon-reload
and restart your kubelet by running:
sudo systemctl restart kubelet.service
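For the sysctls discussed above, on a kubeadm-provisioned node the flag can be added through the KUBELET_EXTRA_ARGS environment variable referenced by that drop-in file. A sketch; the exact variable name and file layout can differ between kubeadm versions:
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
Environment="KUBELET_EXTRA_ARGS=--allowed-unsafe-sysctls=kernel.msg*,net.core.somaxconn"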
Apart from the minikube/kubelet alternatives given to edit/enable unsafe sysctls, is there a different way? What would be a good way to customize kernel parameters for a pod?
Answered above.
I hope it clarified your doubts about setting both safe and unsafe sysctls in a Kubernetes Cluster.

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS to be in a CrashLoopBackoff which turned out to be the case because Ubuntu was configured to use systemd-resolved and so the resolv.conf had a loopback resolver configured. After reading the docs for coredns, I found out that a solution for that would be to change the resolvConf parameter for kubelet - either via commandline arguments or in the config.
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation I didn't really get a clue, because it seems to be tailored to the case of initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize it again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change the /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration - but would changing that CM affect the already running Kubelet?
There is a marshalled version of the running config in /var/lib/config/kubelet.yaml that I could change, but AFAIU this would be overridden by kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a configmap in the node object, to let kubelet dynamically load the configuration from there, but given that there is already an existing configmap it seems more sensible to change that one.
I seemingly had success by some combination of changing the aforementioned CM, running kubeadm upgrade something afterwards and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config - how do I extract the existing kubeadm config in a way that I can feed it back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
In addition, the DynamicKubeletConfig feature gate is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. You need to remember that the Kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.
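A sketch of that workflow for the resolvConf case from the question; the ConfigMap and file names are just examples:
# 1. Put the desired settings in a KubeletConfiguration file, e.g. kubelet-config.yaml,
#    containing among other settings: resolvConf: /run/systemd/resolve/resolv.conf
# 2. Wrap it in a ConfigMap in kube-system:
kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml
# 3. Point the Node at that ConfigMap (requires the DynamicKubeletConfig feature gate
#    and --dynamic-config-dir set on the kubelet, as noted above):
kubectl patch node NODE_NAME -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'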

Is it possible to add/modify kubernetes container spec based on clusterwide setting

I have a kubernetes-based application that uses an operator to build and deploy containers in pods. Sometimes I'd like to run containers in privileged mode to enable performance tracing, but since I'm not deploying the pod/containers directly from a manifest, I cannot simply add privileged mode and the debugfs filesystem mount.
That leaves me to fork the operator code, change where it builds the container spec, and redeploy with the modified operator. Doable, but awkward.
So my question is: is it possible to impose additional attributes on container specs based on some cluster-wide setting, either before the pods are deployed by the operator, or by modifying the container spec after deployment? I tried the latter with kubectl edit pod mypod, but that didn't work.
This is on a physical cluster installed with kubespray.
There are three things to consider:
Your operator can create a controller (e.g. a Deployment) instead of a bare Pod, which allows modifications in the Pod spec area, thus triggering the Deployment's rollout (see rolling update strategy).
Use a MutatingAdmissionWebhook, so that before the Pod is created its manifest is modified/overwritten on the fly (see the sketch after this list).
More info regarding MutatingAdmissionWebhook can be found here and here.
A workaround solution in the form of modifying the supplied spec and swapping out the Pod.
More about this was discussed here.
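For option 2, the registration object would look roughly like the sketch below; the service name, namespace, path and CA bundle are placeholders, and the actual mutation (adding privileged mode, mounting debugfs, etc.) is implemented by the webhook server you deploy behind that Service:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-spec-mutator
webhooks:
  - name: pod-spec-mutator.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore          # don't block Pod creation if the webhook is down
    clientConfig:
      service:
        name: pod-spec-mutator     # your webhook Service
        namespace: default
        path: /mutate
      caBundle: <base64-encoded CA certificate>   # placeholder
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]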
Please let me know if any of the above helped.