Kubectl update ConfigMap with files - kubernetes

I have a tool working on K8s that uses four configuration files:
falco-config/falco.yaml
falco-config/falco_rules.local.yaml
falco-config/falco_rules.yaml
falco-config/k8s_audit_rules.yaml
At deployment time I create the config map for this tool using the command:
kubectl create configmap falco-config --from-file=..../falco-config/
It creates a ConfigMap with these four files. Now suppose I only want to update falco_rules.yaml, but I don't have (for different reasons) the other files. Which kubectl command can help me to do that? I searched for a solution in the K8s docs and on Stack Overflow with no luck.
Another question is: is there an example out there of doing the same via the K8s API in JavaScript?
NOTE:
I have read this question:
Kubectl update configMap
but it doesn't address modifying via the API, or the fact that I need to update only one file while the whole configuration is composed of four files.

Unfortunately there is no way to update specific fields of the ConfigMap in one go. Assuming that the ConfigMap resource has already been created, you can work around this as follows:
Fetch the ConfigMap resource locally: kubectl get configmap <name> --export -o yaml > config.yaml (the --export flag is deprecated in newer kubectl versions and can simply be omitted)
Update the fields in config.yaml so that the values of falco_rules.yaml are properly injected. This can be done programmatically.
kubectl apply -f config.yaml to reconfigure the existing ConfigMap resource
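For example, that workflow could look like this (a minimal sketch, assuming mikefarah's yq v4 is available for the programmatic edit; any script or editor that rewrites the falco_rules.yaml key works just as well):
# 1. Fetch the live ConfigMap
kubectl get configmap falco-config -o yaml > config.yaml
# 2. Replace only the falco_rules.yaml key with the new file's contents
NEW_RULES="$(cat falco-config/falco_rules.yaml)" \
  yq e -i '.data."falco_rules.yaml" = strenv(NEW_RULES)' config.yaml
# 3. Apply the updated manifest back to the cluster
kubectl apply -f config.yaml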

Related

Location of a kubernetes object's definition file

How to find the location of a kubernetes object's definition file.
I know the name of a kubernetes deployment and want to make some changes directly to its definition file instead of using 'kubectl edit deployment'.
The object definitions are stored internally in Kubernetes in replicated storage that's not directly accessible. Even if you could change an object definition there, you would still need to trigger the rest of the Kubernetes update sequence that runs when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally, and use kubectl apply -f to send them to the cluster. If you don't have them then you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check in the results to your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in a yaml format (or -o json to get in a json format), save that to a file, edit the file and apply the changes.
A step-by-step guide would be:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
It's all stored in etcd:
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster to its current state
Take a look at this blog post
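If you really want to see the raw storage, a minimal sketch along these lines reads an object straight from etcd on a kubeadm-style cluster (the key layout and certificate paths are assumptions and vary by setup); in normal operation you would go through the API server with kubectl instead:
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default/my-deployment --prefix
# Note: objects are stored in a binary protobuf encoding by default, so the output is not plain YAML.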

How to view the manifest file used to create a Kubernetes resource?

I have K8s deployed on an EC2-based cluster.
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.
Deployment, service and ingress files were used to create the app setup.
I tried the following command, but I'm not sure if it's the correct one, as it also returns a lot of extra data like lastTransitionTime, lastUpdateTime and status:
kubectl get deployment -o yaml
What is the correct command to view the manifest yaml files of an existing deployed resource?
There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied annotation; if you used kubectl apply, it holds a JSON version of a more original-ish manifest, but again probably with some defaulted fields.
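For instance, if the Deployment was created with kubectl apply, the last-applied manifest is kept in the kubectl.kubernetes.io/last-applied-configuration annotation and can be read back like this (the deployment name is a placeholder):
kubectl apply view-last-applied deployment my-deployment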
You can try using the --export flag, but it is deprecated and may not work perfectly.
kubectl get deployment -o yaml --export
Refer: https://github.com/kubernetes/kubernetes/pull/73787
Another trick is to set cat as the editor, so kubectl edit simply dumps the live object to stdout instead of opening an interactive editor:
KUBE_EDITOR="cat" kubectl edit secrets rook-ceph-mon -o yaml -n rook-ceph 2>/dev/null >user.yaml

Is there any way to batch restart deployments to apply a config change in kubernetes v1.15.2?

I changed my kubernetes cluster (v1.15.2) configmap, and now I want the config change to apply to all of my deployments in a namespace. What is the best practice for doing this? I tried doing it like this:
kubectl rollout restart deployment soa-report-consumer
but my cluster has so many deployments; should I write a shell script to complete this task, or is there a simpler way?
The usual fix for this is to use some automation from a tool like Kustomize or Helm so that the deployments automatically update when the config data changes.
The configMapGenerator of kustomize can be used for this.
configMapGenerator contains a list of ConfigMaps to generate.
By default, generated ConfigMaps will have a hash appended to the name. The ConfigMap hash is appended after a nameSuffix, if one is specified. Changes to ConfigMap data will cause a ConfigMap with a new name to be generated, triggering a rolling update to Workloads referencing the ConfigMap.
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md
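As a minimal sketch (the file names and the ConfigMap name are assumptions, not from the question), a kustomization.yaml using configMapGenerator could look like this; changing application.properties and re-applying generates a ConfigMap with a new hashed name, which in turn rolls the Deployments that reference it:
cat > kustomization.yaml <<'EOF'
resources:
  - deployment.yaml            # references the ConfigMap by the name below
configMapGenerator:
  - name: soa-report-config
    files:
      - application.properties # editing this file changes the generated ConfigMap name
EOF
kubectl apply -k .             # or: kustomize build . | kubectl apply -f -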

How to Create a Configmap dynamically from a file

I can create a config map from a property file and use the same config files inside the POD. However, I don't want to use a configmap created in the past and supplied with the helm chart. Rather, in the helm chart's values.yaml I want to provide a file name from which the config map will be created dynamically.
Any suggestions/examples are welcome.
Thanks in advance -
Tutai
See if the approach described in kubernetes/charts issue 1310 works for you.
I suggest that we allow for overriding the name of the ConfigMap that gets mounted to the persistent volume.
That way, the parent chart could create, and even make templates for, these ConfigMaps.
For example values.yaml could have the following fields added:
## alertmanager ConfigMap entries
##
alertmanagerFiles:
  # ConfigMap override where full-name is {{.Release.Name}}-{{.Values.alertmanagerFiles.configMapOverrideName}}
  configMapOverrideName: ""
...
## Prometheus server ConfigMap entries
##
serverFiles:
  # ConfigMap override where full-name is {{.Release.Name}}-{{.Values.serverFiles.configMapOverrideName}}
  configMapOverrideName: ""
...
You can see the implementation of that issue in commit 2ea7764, as an example of override.
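Purely as an illustration of how such an override would be consumed (the release name, chart reference and file name are placeholders, and the exact value paths depend on the chart version in use):
# Create your own ConfigMap; its full name must match {{.Release.Name}}-<override name>
kubectl create configmap myrelease-custom-alertmanager --from-file=alertmanager.yml
# Point the chart at it instead of the chart-generated ConfigMap
helm install myrelease stable/prometheus \
  --set alertmanagerFiles.configMapOverrideName=custom-alertmanager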
This differs from a file approach, where you create a new config map and replace the old one:
kubectl create configmap asetting --from-file=afile \
-o yaml --dry-run | kubectl replace -f -
See "Updating Secrets and ConfigMaps" as an example.

Kubernetes deployments: Editing the 'spec' of a pod's YAML file fails

The env element added in spec.containers of a pod using the K8s dashboard's Edit doesn't get saved. Does anyone know what the problem is?
Is there any other way to add environment variables to pods/containers?
I get this error when doing the same by editing the file using nano:
# pods "EXAMPLE" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
Thanks.
Not all fields can be updated. This fact is sometimes mentioned in the kubectl explain output for the object (and the error you got lists the fields that can be changed, so the others presumably cannot):
$ kubectl explain pod.spec.containers.env
RESOURCE: env <[]Object>
DESCRIPTION:
List of environment variables to set in the container. Cannot be updated.
EnvVar represents an environment variable present in a Container.
If you deploy your Pods using a Deployment object, then you can change the environment variables in that object with kubectl edit since the Deployment will roll out updated versions of the Pod(s) that have the variable changes and kill the older Pods that do not. Obviously, that method is not changing the Pod in place, but it is one way to get what you need.
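For example, either of the following changes the variable on the Deployment and lets it roll out new Pods (the deployment name and variable are placeholders):
kubectl set env deployment/my-deployment LOG_LEVEL=debug
# or edit spec.template.spec.containers[*].env in the Deployment directly
kubectl edit deployment my-deployment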
Another option for you may be to use ConfigMaps. If you use the volume plugin method for mounting the ConfigMap and your application is written to be aware of changes to the volume and reload itself with new settings on change, it may be an option (or at least give you other ideas that may work for you).
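A minimal sketch of that volume approach (all names are placeholders): the ConfigMap keys show up as files under the mount path, and edits to the ConfigMap eventually appear inside the running container, which the application can watch for:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: config-watch-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/app/settings.conf; sleep 30; done"]
      volumeMounts:
        - name: app-config
          mountPath: /etc/app
  volumes:
    - name: app-config
      configMap:
        name: my-app-config   # assumes this ConfigMap exists with a settings.conf key
EOF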
We cannot edit the env variables, resource limits, or service account of a pod that is running live.
But we can definitely edit/update the image name, tolerations, active deadline seconds, etc.
However, the Deployment can be easily edited, because the Pod is a child template of the Deployment specification.
In order to "edit" a running pod with the desired changes, the following approach can be used.
Extract the pod definition to a file, make the necessary changes, delete the existing pod, and create a new pod from the edited file:
kubectl get pod my-pod -o yaml > my-new-pod.yaml
vi my-new-pod.yaml
kubectl delete pod my-pod
kubectl create -f my-new-pod.yaml
Not sure about others, but when I edited the pod YAML from the Google Kubernetes Engine workloads page, I got the same error. When I retried after some time, it worked.
It feels like some other update was going on at the same time earlier; when I edited the YAML quickly and applied the changes, it worked.