I can create a ConfigMap from a property file and use the same config files inside the Pod. However, I don't want to use a ConfigMap that was created in the past and supplied with the Helm chart. Rather, in the Helm chart's values.yaml I want to provide a file name from which the ConfigMap will be created dynamically.
Any suggestions/examples are welcome.
Thanks in advance -
Tutai
See if the approach described in kubernetes/charts issue 1310 works for you.
I suggest that we allow for overriding the name of the ConfigMap that gets mounted to the persistent volume.
That way, the parent chart could create, and even make templates for, these ConfigMaps.
For example, values.yaml could have the following fields added:
## alertmanager ConfigMap entries
##
alertmanagerFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.alertmanagerFiles.configMapOverrideName}}
configMapOverrideName: ""
...
## Prometheus server ConfigMap entries
##
serverFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.serverFiles.configMapOverrideName}}
configMapOverrideName: ""
...
You can see the implementation of that issue in commit 2ea7764 as an example of such an override.
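With that in place, a parent chart that pulls in the prometheus chart as a dependency can ship the configuration file itself and render the ConfigMap from it. A rough sketch, where the dependency alias, the override name, and the file path are all hypothetical:

# parent chart values.yaml (assuming the dependency is aliased "prometheus")
prometheus:
  serverFiles:
    configMapOverrideName: server-conf

# parent chart templates/server-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-server-conf
data:
  prometheus.yml: |-
{{ .Files.Get "files/prometheus.yml" | indent 4 }}

Since the chart mounts {{ .Release.Name }}-{{ .Values.serverFiles.configMapOverrideName }}, the ConfigMap above is the one that ends up mounted, and its content comes from a file shipped with the parent chart rather than from a pre-existing ConfigMap.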
That override approach differs from a file-based approach, where you create a new ConfigMap and replace the old one:
kubectl create configmap asetting --from-file=afile \
-o yaml --dry-run | kubectl replace -f -
See "Updating Secrets and ConfigMaps" as an example.
Related
When operating a k8s cluster with an admission controller that limits the allowed registries, it's desirable to check manifests to verify that they only reference images from those registries.
Is there a well-established and correct way to process a Kubernetes manifest stream, such as Kustomize output, and list all container images it references, including those in Deployments, StatefulSets, Jobs, CRDs that embed a PodTemplate, and so on?
I ended up writing my own, then realised this must be a solved problem. kustomize has the images: transformer to rewrite images, but it doesn't seem to be able to list the candidates the transformer inspects. Surely there's something?
For example, if I have the kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- "https://github.com/prometheus-operator/kube-prometheus"
I want to be able to run some getimages filter such that
kustomize build | getimages
returns the same list as this hacky shell pipeline example:
$ kustomize build|grep 'image:' | awk '$2 != "" { print $2}' | sort -u
grafana/grafana:8.4.6
jimmidyson/configmap-reload:v0.5.0
k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.2
k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.1
quay.io/brancz/kube-rbac-proxy:v0.12.0
quay.io/prometheus/alertmanager:v0.24.0
quay.io/prometheus/blackbox-exporter:v0.20.0
quay.io/prometheus/node-exporter:v1.3.1
quay.io/prometheus-operator/prometheus-operator:v0.55.1
quay.io/prometheus/prometheus:v2.34.0
... but in a robust and correct manner, unlike said hacky shell command.
I expected tools like kubeval or kustomize to be able to do this, but have drawn blanks in all searching.
Edit: There is kustomize cfg tree --image to list images, but:
It doesn't integrate with the kustomize image transformer's configuration so it won't recognise images in CRDs. You have to add additional --field specs for each one manually.
Its output format is painful if you just want the image names.
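For what it's worth, a structure-aware approximation of such a getimages filter can be put together from yq and jq. This is only a sketch, assuming mikefarah yq v4 and jq are installed, and it shares the grep pipeline's limitation of matching any field literally named image, so CRD-specific image fields under other names still need extra handling:

kustomize build \
  | yq eval -o=json '.' - \
  | jq -r '.. | .image? | strings' \
  | sort -u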
How to find the location of a kubernetes object's definition file.
I know the name of a Kubernetes Deployment and want to make some changes directly to its definition file instead of using kubectl edit deployment <deployment-name>.
The object definitions are stored internally by Kubernetes in replicated storage (etcd) that's not directly accessible as files. Even if you changed an object definition there, you would still need to trigger the rest of the Kubernetes update sequence that runs when an object changes.
Typical practice is to keep the Kubernetes YAML files in source control. You can then edit these locally and use kubectl apply -f to send them to the cluster. If you don't have them, you can run commands like kubectl get deployment depl-name -o yaml to get them out, and then check the results in to your source control repository.
If you really want to edit YAML definitions in an imperative, non-reproducible way, kubectl edit is the most direct thing you can do.
You could execute kubectl get deployment <deployment-name> -o yaml to get the deployment definition in YAML format (or -o json to get it in JSON format), save that to a file, edit the file, and apply the changes.
A step-by-step guide would be:
Run kubectl get deployment deployment-name -o yaml > deployment-name.yaml
Edit and save the deployment-name.yaml using the editor of your preference
Run kubectl apply -f deployment-name.yaml to apply the changes
It's all stored in etcd (a sketch of inspecting it with etcdctl follows below):
Nodes
Namespaces
ServiceAccounts
Roles and RoleBindings, ClusterRoles / ClusterRoleBindings
ConfigMaps
Secrets
Workloads: Deployments, DaemonSets, Pods, …
Cluster’s certificates
The resources within each apiVersion
The events that bring the cluster into its current state
Take a look at this blog post
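If you want to look at those keys directly, something along these lines works against a kubeadm-style cluster (the endpoint and certificate paths are kubeadm defaults and will differ on other distributions; note that the values are stored as protobuf, so this is for inspection rather than for editing):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/deployments/default --prefix --keys-only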
I changed a ConfigMap in my Kubernetes cluster (v1.15.2), and now I want the new config to apply to all of my Deployments in a namespace. What is the best practice for doing this? I tried doing it like this:
kubectl rollout restart deployment soa-report-consumer
but my cluster has so many Deployments. Should I write a shell script to complete this task, or is there a simpler way?
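For the immediate question, no script should be needed: giving kubectl rollout restart only the resource type restarts every Deployment in the target namespace, for example:

kubectl rollout restart deployment -n <namespace>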
The usual fix for this is to use some automation from a tool like Kustomize or Helm so that the deployments automatically update when the config data changes.
The configMapGenerator feature of Kustomize can be used for this (a small example follows below).
configMapGenerator contains a list of ConfigMaps to generate.
By default, generated ConfigMaps will have a hash appended to the name. The ConfigMap hash is appended after a nameSuffix, if one is specified. Changes to ConfigMap data will cause a ConfigMap with a new name to be generated, triggering a rolling update to Workloads referencing the ConfigMap.
https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md
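A minimal kustomization.yaml using the generator could look like the following (the ConfigMap name and the properties file are placeholders). Because the generated name carries a content hash, editing application.properties and re-applying produces a new ConfigMap name and therefore a rolling update of every Deployment that references soa-report-config:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: soa-report-config
    files:
      - application.properties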
I have a tool working on K8s that uses four configuration files:
falco-config/falco.yaml
falco-config/falco_rules.local.yaml
falco-config/falco_rules.yaml
falco-config/k8s_audit_rules.yaml
At deployment time I create the config map for this tool using the command:
kubectl create configmap falco-config --from-file=..../falco-config/
It creates a ConfigMap with these four files. Now suppose I only want to update falco_rules.yaml, but I don't have (for various reasons) the other files. Which kubectl command can help me do that? I searched for a solution in the K8s docs and on Stack Overflow with no luck.
Another question: is there an example out there of doing the same via the K8s API in JavaScript?
NOTE:
I have read this question:
Kubectl update configMap
but it doesn't address modifying the ConfigMap via the API, nor the fact that I need to update only one file while the whole configuration is composed of four files.
Unfortunately, there is no dedicated kubectl command for updating a single key of a ConfigMap from a file in one go. Assuming that the ConfigMap resource has already been created, you can work around this as follows:
Fetch the ConfigMap resource locally: kubectl get configmap <name> -o yaml > config.yaml (the --export flag sometimes used for this has been deprecated and removed in recent kubectl versions)
Update config.yaml so that the new content of falco_rules.yaml is properly injected into the data section. This can be done programmatically.
Run kubectl apply -f config.yaml to reconfigure the existing ConfigMap resource.
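If you only have falco_rules.yaml locally, one possible shortcut, sketched here assuming kubectl 1.18+ (for --dry-run=client) and jq, is to render just that key client-side and merge-patch it into the existing ConfigMap, leaving the other three keys untouched:

kubectl patch configmap falco-config --type merge -p \
  "$(kubectl create configmap falco-config --from-file=falco_rules.yaml \
       --dry-run=client -o json | jq '{data: .data}')"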
I want to deploy Grafana using Kubernetes, but I don't know how to attach provisioned dashboards to the Pod. Storing them as key-value data in a ConfigMap seems like a nightmare to me (example here: https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml), since in my case there would be many more JSON dashboards; hence the harsh opinion.
I didn't have an issue with configuring the Grafana settings, datasources, and dashboard providers as ConfigMaps, since they are defined in single files, but the dashboards situation is a little trickier for me.
All of my dashboards are stored in the repo under "/files/dashboards/", and I wonder how to make them available to the Pod other than in the way described earlier. I considered using a hostPath volume for a moment, but that doesn't make sense for a multi-node deployment across different hosts.
Maybe it's easy, but I'm fairly new to Kubernetes and can't figure it out, so any help would be much appreciated. Thank you!
You can automatically generate a ConfigMap from a set of files in a directory. Each file will be a key-value pair in the ConfigMap, with the file name being the key and the file content being the value (like in your linked example, but done automatically instead of manually).
Assuming that your dashboard files are stored as, for example:
files/dashboards/
├── k8s-cluster-rsrc-use.json
├── k8s-node-rsrc-use.json
└── k8s-resources-cluster.json
You can run the following command to directly create the ConfigMap in the cluster:
kubectl create configmap my-config --from-file=files/dashboards
If you prefer to only generate the YAML manifest for the ConfigMap, you can do:
kubectl create configmap my-config --from-file=files/dashboards --dry-run=client -o yaml > my-config.yaml
(On kubectl versions before 1.18, the flag is spelled --dry-run rather than --dry-run=client.)
You could look into these options:
Use a persistent volume.
Store the JSON files for the dashboards in a code repository like Git, a file repository like Nexus, or a plain web server, and use an init container to fetch the files before the application (Grafana) container is started, placing them on a volume shared between the init container and the Grafana container. This example could be a good starting point.
Notice that this doesn't require a persistent volume; the example uses a volume of type emptyDir. A minimal sketch of the idea follows below.
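Here is such a sketch; the curl image, the download URL, and the Grafana version are placeholders, and the dashboards mount path must match whatever your dashboard provider configuration points at:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      volumes:
        - name: dashboards
          emptyDir: {}
      initContainers:
        - name: fetch-dashboards
          image: curlimages/curl:8.7.1   # placeholder image
          command:
            - sh
            - -c
            # placeholder URL; download as many dashboard JSON files as needed
            - curl -fsSL -o /dashboards/k8s-cluster-rsrc-use.json https://example.com/dashboards/k8s-cluster-rsrc-use.json
          volumeMounts:
            - name: dashboards
              mountPath: /dashboards
      containers:
        - name: grafana
          image: grafana/grafana:9.5.2   # pick the Grafana version you need
          volumeMounts:
            - name: dashboards
              # must match the path configured in the dashboard provider
              mountPath: /var/lib/grafana/dashboards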