Add apiserver extraArgs/extraVolumes in a live cluster - kubernetes

I have a Kubernetes 1.17 cluster, and I want to add some extraArgs and extraVolumes (like in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) to the apiserver. Usually, I update the manifest file /etc/kubernetes/manifests/kube-apiserver.yaml to apply my new config, then I update the kubeadm-config ConfigMap to keep this new configuration for the next Kubernetes upgrade (because static pod manifests are re-generated from this ConfigMap when upgrading).
Is it possible to update only the kubeadm-config ConfigMap and then apply the configuration with a command like kubeadm init phase control-plane apiserver? What are the risks?

That's the way to go to update the static pod definitions of the control plane components, but instead of the init command I guess you meant upgrade.
The kubeadm upgrade command reads the current cluster configuration from the ConfigMap (kubectl -n kube-system get cm kubeadm-config -o yaml) each time before applying changes.
Talking about risks, you can try to anticipate them by studying the output of the kubeadm upgrade diff command, e.g.
kubeadm upgrade diff v1.20.4. More details in this documentation. You could also try the --dry-run flag from this doc; it won't change any state, it only displays the actions that would be performed.
In addition, you could also read about --experimental-patches in this doc.
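A minimal sketch of that workflow, assuming a hypothetical target version and that the commands run on a control-plane node:

# edit the stored ClusterConfiguration (extraArgs/extraVolumes go here)
kubectl -n kube-system edit cm kubeadm-config
# preview the manifest changes for an example target version
kubeadm upgrade diff v1.17.4
# show the actions that would be performed without changing any state
kubeadm upgrade apply v1.17.4 --dry-run
# regenerate only the apiserver manifest from a local config file, if desired
kubeadm init phase control-plane apiserver --config kubeadm-config.yaml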

If you mean changing the apiserver config in a live cluster, you can edit the static pod manifest /etc/kubernetes/manifests/kube-apiserver.yaml and the kubelet will apply it.
But you must be careful, because the old static pod is killed before the new pod is ready.
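For example, a minimal sketch of that in-place edit:

# edit the static pod manifest; the kubelet detects the change and recreates the apiserver pod
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# watch the kube-apiserver pod come back (the API is briefly unavailable while it restarts)
kubectl -n kube-system get pods -w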

Related

Kubectl update imagePullPolicy

How do we update the imagePullPolicy alone for certain deployments using kubectl? The image tag has changed, however we don't require a restart. We need to update existing deployments with --image-pull-policy as IfNotPresent.
Note: we don't have the complete YAML or JSON for the deployments, hence we need to do it via kubectl.
Use
kubectl edit deployment <deployment_name> -n <namespace>
Then you will be able to edit imagePullPolicy.
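If you prefer a non-interactive command, here is a sketch using kubectl patch (the deployment name, namespace and container index 0 are placeholders; note that changing the pod template does trigger a rollout of the deployment):

kubectl patch deployment <deployment_name> -n <namespace> --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'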

Is kubectl apply safe enough to update all pods no matter how they were created?

A pod can be created by a Deployment, ReplicaSet or DaemonSet. If I am updating a pod's container specs, is it OK for me to simply modify the YAML file that created the pod? Would it be erroneous once I have done that?
Brief question:
Is kubectl apply -f xxx.yml the silver bullet for all pod updates?
...if I am updating a pod's container specs, is it OK for me to simply modify the yaml file that created the pod?
Since the pod spec is part of the controller spec (e.g. deployment, daemonset), to update the container spec you naturally start with the controller spec. Also, a running pod is largely immutable; there isn't much you can change directly unless you do a replace, which is what the controller is already doing.
You should not make changes to the pods directly, but update the spec.template.spec section of the deployment used to create the pods.
The reason for this is that the deployment is the controller that manages the replicasets and therefore the pods that are created for your application. That means if you apply changes to the pod manifest directly and something like a pod rescheduling/restart happens, the changes made to the pod will be lost, because the replicaset will recreate the pod according to its own specification and not the specification of the last running pod.
You are safe to use kubectl apply to apply changes to existing resources, but if you are unsure you can always extract the current state of the deployment from kubernetes and pipe that output into a yaml file to create a backup:
kubectl get deploy/<name> --namespace <namespace> -o yaml > deploy.yaml
Another option is to use the internal rollback mechanism of kubernetes to restore a previous revision of your deployment. See https://learnk8s.io/kubernetes-rollbacks for more info on that.
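For instance, a minimal sketch of that rollback mechanism (deployment name, namespace and revision are placeholders):

# list the recorded revisions of the deployment
kubectl rollout history deployment/<name> -n <namespace>
# roll back to the previous revision (or pass --to-revision=<n> for a specific one)
kubectl rollout undo deployment/<name> -n <namespace>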

Kubernetes pod Rollback and Restart

The Deployment resource object is still not supported in our cluster and is not enabled.
We are using the Pod resource object YAML file, something like below:
apiVersion: v1
kind: Pod
metadata:
  name: sample-test
  namespace: default
spec:
  automountServiceAccountToken: false
  containers:
I have explored the patch and PUT REST APIs for the Pod (kubectl patch and replace) - they update to the new image version and the pod restarts.
I need help with the points below:
When the image version is the same, it will not update and the pod will not restart.
How can I achieve a pod restart? Is there any API for this, or any alternate
approach? My pod also refers to a configmap and a secret. After I
make changes to the secret, I want to restart the pod so that it picks up the updated
value.
Suppose a patch is applied with a new container image and it fails (status is Failed); I want to roll back to the previous version. How can I achieve this with a standalone pod, without using a deployment? Is there any alternate approach?
Solutions for your scenario can be handled like this:
When the image version is the same, it will not update and the pod will not restart. How can I achieve a pod restart? Is there any API for this, or any alternate approach? My pod also refers to a configmap and a secret; after I make changes to the secret, I want to restart the pod so that it picks up the updated value.
Create a new secret/configmap each time and update the pod yaml to use the new configmap/secret rather than the old name.
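A minimal sketch of that pattern, assuming a hypothetical configmap named app-config backed by a config.properties file:

# create a new, versioned configmap instead of mutating the existing one
kubectl create configmap app-config-v2 --from-file=config.properties -n default
# then change the pod YAML to reference app-config-v2 instead of app-config-v1 and re-apply it;
# recreating the pod with the new reference is what makes it pick up the new values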
Suppose a patch is applied with a new container image and it fails (status is Failed); I want to roll back to the previous version. How can I achieve this with a standalone pod, without using a deployment? Is there any alternate approach?
Before you do a Pod update, get the current Pod YAML using kubectl like this:
kubectl get pod <pod-name> -o yaml -n <namespace>
After getting the YAML, generate the new pod YAML and apply it. In case of failure, clean up the new resources created (configmaps & secrets) and apply the older version of the pod to achieve the rollback.
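A minimal sketch of that backup-and-rollback flow, using the sample pod name from the question and a hypothetical file name for the new version:

# back up the current pod definition before changing anything
kubectl get pod sample-test -n default -o yaml > sample-test-backup.yaml
# apply the new pod definition
kubectl apply -f sample-test-new.yaml
# if the new pod ends up in a failed state, delete it and restore the backup
# (you may need to strip cluster-populated fields such as status, uid and resourceVersion first)
kubectl delete pod sample-test -n default
kubectl create -f sample-test-backup.yaml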

Where can I find kubeadm-config.yaml on my kubernetes cluster?

There is a k8s single-master node; I need to back it up and restore it.
I googled this topic and found a solution:
https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/
Everything looked easy, so I followed the instructions and got a copy of the certificates and a snapshot of the etcd database.
But in the end, I was not able to find kubeadm-config.yaml on my master server.
Where can I find this file?
During kubeadm init, kubeadm uploads the ClusterConfiguration object to your cluster in a ConfigMap called kubeadm-config in the kube-system namespace. You can get it from the ConfigMap and take a backup
kubectl get cm kubeadm-config -n kube-system -o yaml > kubeadm-config.yaml
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/
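If you only want the ClusterConfiguration document itself rather than the whole ConfigMap, here is a sketch assuming the data key is named ClusterConfiguration (as it is in recent kubeadm releases):

kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-config.yaml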

k3s cleanup of HelmChart?

I have followed instructions from this blog post to set up a k3s cluster on a couple of raspberry pi 4:
I'm now trying to get my hands dirty with Traefik as the front end, but I'm having issues with the way it has been deployed, as a 'HelmChart' I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting up k3s with the --no-deploy traefik option so that I can add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
  dashboard:
    enabled: true
    domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to just reinstalling my entire cluster over and over because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm CLI (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs with kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here (a sketch of both follows below):
Use the --now flag to delete the resources in your YAML file with minimal delay.
Use the --grace-period=0 --force flags to force delete the resources.
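A sketch of both options, assuming the manifest was saved as traefik.yaml:

# option 1: delete with minimal delay
kubectl delete -f traefik.yaml --now
# option 2: force deletion, skipping the graceful termination period
kubectl delete -f traefik.yaml --grace-period=0 --force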
There are other options but you'll need the Helm CLI for them.
Please let me know if that helped.