Is there a way to share a configMap in kubernetes between namespaces? - kubernetes

We are using one namespace for the development environment and one for staging. Inside each of these namespaces we have several configMaps and secrets, but many variables are shared between the two environments, so we would like to have a common file for those.
Is there a way to have a base configMap into the default namespace and refer to it using something like:
envFrom:
- configMapRef:
    name: default.base-config-map
If this is not possible, is there any way other than duplicating the variables across namespaces?

Kubernetes 1.13 and earlier
They cannot be shared, because they cannot be accessed from pods outside their namespace. Names of resources need to be unique within a namespace, but not across namespaces. The workaround is to copy them over.
Copy secrets between namespaces
kubectl get secret <secret-name> --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Copy configmaps between namespaces
kubectl get configmap <configmap-name>  --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Kubernetes 1.14+
The --export flag was deprecated in 1.14 (and later removed). The following command can be used instead:
kubectl get secret <secret-name> --namespace=<source-namespace>  -o yaml \
| sed 's/namespace: <from-namespace>/namespace: <to-namespace>/' \
| kubectl create -f -
If someone still sees a need for the flag, there's an export script written by @zoidbergwill.
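The sed rewrite in the pipeline above can be tried locally before pointing it at a cluster. A minimal sketch, with a made-up Secret and the namespace names develop/staging standing in for yours (only the text transformation is shown; the kubectl ends of the pipe are unchanged):

```shell
# Stand-in for `kubectl get secret shared-vars --namespace=develop -o yaml`
cat > /tmp/shared-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: shared-vars
  namespace: develop
type: Opaque
data:
  BASE_URL: aHR0cDovL2V4YW1wbGUuY29t
EOF

# The substitution performed between the two kubectl calls:
sed 's/namespace: develop/namespace: staging/' /tmp/shared-secret.yaml
```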

Use the following commands to copy a configmap or secret from one namespace to another:
kubectl get configmap <configmap-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -
kubectl get secret <secret-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -

Related

How do you specify GKE resource requests for Argo CD?

I am trying to set up Argo CD on Google Kubernetes Engine Autopilot and each pod/container is defaulting to the default resource request (0.5 vCPU and 2 GB RAM per container). This is way more than the pods need and is going to be too expensive (13GB of memory reserved in my cluster just for Argo CD). I am following the Getting Started guide for Argo CD and am running the following command to add Argo CD to my cluster:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
How do I specify the resources for each pod when I am using someone else's yaml template? The only way I have found to set resource requests is with my own yaml file like this:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
But I don't understand how to apply this type of configuration to Argo CD.
Thanks!
Right now you are just applying the manifest straight from GitHub, so you cannot edit it. What you need to do is:
1. Download the file, e.g. with wget:
wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
2. Use an editor like nano or vim to add the resource requests as explained in the comments above.
3. Apply the edited file: kubectl apply -f newfile.yaml
You can dump the yaml of argocd, then customize your resource request, and then apply the modified yaml.
$ kubectl get deployment -n argocd -o yaml > argocd_deployment.yaml
$ kubectl get sts -n argocd -o yaml > argocd_statefulset.yaml
$ # modify resource
$ vim argocd_deployment.yaml
$ vim argocd_statefulset.yaml
$ kubectl apply -f argocd_deployment.yaml
$ kubectl apply -f argocd_statefulset.yaml
Or modify the deployment and statefulset objects directly with kubectl edit:
$ kubectl edit deployment -n argocd
$ kubectl edit sts -n argocd
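Another route that avoids editing the upstream file at all is kustomize (built into kubectl as apply -k), which can layer a resource patch over the remote manifest. This is only a sketch: the deployment name argocd-repo-server and the request values are illustrative, so verify the actual names in install.yaml before using it.

```yaml
# kustomization.yaml (sketch; verify target names against install.yaml)
namespace: argocd
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
patches:
  - target:
      kind: Deployment
      name: argocd-repo-server    # illustrative target
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 250m
            memory: 256Mi
```

Then kubectl apply -k . (or kustomize build . | kubectl apply -f -) applies the patched manifests, and your changes survive upstream updates.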

How do I redirect Ansible to use files in a role directory?

Salutations, I am deploying pods/applications to EKS via Ansible. My playbook runs a few kubectl apply -f commands in order to deploy EKS resources and all of the .yaml files are in that directory.
I would like to place the .yaml files that create each application in its own Ansible role's files directory, to clean up the main Ansible directory a bit (the .yaml files are becoming overwhelming and I only have two applications deployed so far).
The issue is this: when I move the .yaml files to their respective roles/<role>/files directory, Ansible still looks for the files in the main Ansible directory instead of scanning the role directory.
How do I redirect Ansible to run the shell commands on .yamls in the role's file directory? Playbook below:
#
# Deploying Jenkins to AWS EKS
#
# Create Jenkins Namespace
- name: Create Jenkins Namespace & set it to default
  shell: |
    kubectl create namespace jenkins
    kubectl config set-context --current --namespace=jenkins

# Create Jenkins Service Account
- name: Create Jenkins Service Account
  shell: |
    kubectl create serviceaccount jenkins-master -n jenkins
    kubectl get secret $(kubectl get sa jenkins-master -n jenkins -o jsonpath={.secrets[0].name}) -n jenkins -o jsonpath={.data.'ca\.crt'} | base64 --decode

# Deploy Jenkins
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f jenkins-service.yaml
    kubectl apply -f jenkins-vol.yaml
    kubectl apply -f jenkins-role.yaml
    kubectl apply -f jenkins-configmap.yaml
    kubectl apply -f jenkins-deployment.yaml
The role follows the standard directory structure, but Ansible doesn't check the role's files location for the yaml files run in the playbook above.
You could use the role_path variable, which contains the path to the currently executing role, and write your tasks like:
- name: Deploy Jenkins Application
  shell: |
    kubectl apply -f {{ role_path }}/files/jenkins-service.yaml
    kubectl apply -f {{ role_path }}/files/jenkins-vol.yaml
    ...
Alternately, a fileglob lookup might be easier:
- name: Deploy Jenkins Application
  command: kubectl apply -f {{ item }}
  loop: "{{ query('fileglob', '*.yaml') }}"
This would loop over all the *.yaml files in your role's files directory.
You could consider replacing your use of kubectl with the k8s module.
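A sketch of the k8s-module variant, assuming the kubernetes.core collection is installed (ansible-galaxy collection install kubernetes.core) and the manifests live in the role's files directory:

```yaml
- name: Deploy Jenkins Application
  kubernetes.core.k8s:
    state: present
    src: "{{ item }}"
  loop: "{{ query('fileglob', '*.yaml') }}"
```

This removes the dependency on a kubectl binary on the control node and gives you proper changed/ok reporting per resource.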
Lastly, rather than managing these resources using Ansible, you could consider using kustomize, which I have found to be easier to work with unless you're relying heavily on Ansible templating.

Copy a configmap but with another name in the same namespace (kubernetes/kubectl)

I have a configmap my-config in a namespace and need to make a copy (as part of some temporary experimentation) but with another name, so I end up with:
my-config
my-config-copy
I can do this with:
kubectl get cm my-config -o yaml > my-config-copy.yaml
edit the name manually followed by:
kubectl create -f my-config-copy.yaml
But is there a way to do it automatically in one line?
I can get some of the way with:
kubectl get cm my-config --export -o yaml | kubectl apply -f -
but I am missing the part with the new name (since names are immutable I know this is not standard behavior).
Also preferably without using export since:
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
Any suggestions?
You can achieve this by combining kubectl's patch and apply functions.
kubectl patch cm source-cm -p '{"metadata":{ "name":"target-cm"}}' --dry-run=client -o yaml | kubectl apply -f -
source-cm and target-cm are the config map names
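A sed-based variant of the same one-liner also works, provided the pattern is anchored so it only hits metadata.name and not a data key that happens to contain the same string. A local sketch, where the manifest below stands in for the output of kubectl get cm my-config -o yaml:

```shell
# Stand-in for the output of `kubectl get cm my-config -o yaml`
cat > /tmp/my-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  some-key: some-value
EOF

# In-cluster this would be:
#   kubectl get cm my-config -o yaml \
#     | sed 's/^  name: my-config$/  name: my-config-copy/' \
#     | kubectl create -f -
sed 's/^  name: my-config$/  name: my-config-copy/' /tmp/my-config.yaml
```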

kubernetes update secrets using imperative commands

I am creating kubernetes secrets using the below command
kubectl create secret generic test-secret --save-config --dry-run=client --from-literal=a=data1 --from-literal=b=data2 -o yaml | kubectl apply -f -
Now I need to add new literals using an imperative kubectl command. How do I do that? I tried, e.g.:
kubectl apply secret generic test-secret --from-literal=c=data3 -o yaml | kubectl apply -f -
but gave the below error
Error: unknown flag: --from-literal
See 'kubectl apply --help' for usage.
error: no objects passed to apply
Any quick help is appreciated
add new literals using kubectl imperative command
When working with imperative commands it typically means that you don't save the change in a place outside the cluster. You can edit a Secret in the cluster directly:
kubectl edit secret test-secret
But if you want to automate your "addition", then you most likely save your Secret somewhere outside the cluster before applying it. How to do this depends on how you manage Secrets. One way is to add it to e.g. Vault and have it automatically injected. When working in an automated way, it is easier to practice immutable Secrets and create new ones instead of mutating them, because you typically need to redeploy your app as well to make sure it uses the new value. Using Kustomize with a secretGenerator might be a good option if you work with immutable Secrets.
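As a sketch of the secretGenerator approach (the name and literals below are illustrative), a kustomization.yaml like this appends a content hash to the generated Secret's name, so changing a literal produces a new Secret and rolls referencing workloads onto it:

```yaml
# kustomization.yaml (illustrative names)
secretGenerator:
  - name: test-secret
    literals:
      - a=data1
      - b=data2
      - c=data3
```

Build and apply with kubectl apply -k .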
You can use the kubectl patch imperative command. For example:
root@controlplane:~# kubectl patch secrets test-secret --type='json' -p='[{"op": "add", "path": "/data/newkey", "value": "bmV3VmFsCg=="}]'
secret/test-secret patched
root@controlplane:~# kubectl describe secrets test-secret
Name: test-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
a: 5 bytes
b: 5 bytes
newkey: 7 bytes
You need to pass a base64-encoded value. To encode/decode the new value of a key you can use the commands below:
root@controlplane:~# echo "newValue" | base64
bmV3VmFsdWUK
root@controlplane:~# echo bmV3VmFsdWUK | base64 -d
newValue
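One caveat with the encode step: echo appends a trailing newline, and that newline becomes part of the decoded secret value (which is why bmV3VmFsdWUK above decodes to newValue plus a newline). When the consumer of the secret is picky about whitespace, printf (or echo -n) is safer:

```shell
# echo adds a trailing newline that ends up inside the secret value
echo "newValue" | base64          # bmV3VmFsdWUK  (decodes to "newValue\n")

# printf encodes exactly the bytes given, no newline
printf '%s' "newValue" | base64   # bmV3VmFsdWU=  (decodes to "newValue")
```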
Another option is kubectl edit, but you can't use that if you are automating your changes:
kubectl edit secret test-secret

Kubectl update configMap

I am using the following command to create a configMap.
kubectl create configmap test --from-file=./application.properties --from-file=./mongo.properties --from-file=./logback.xml
Now, I have modified a value for a key in mongo.properties, which I need to update in Kubernetes.
Option1 :-
kubectl edit configmap test
Here, it opens the entire file. But, I want to just update mongo.properties and hence want to see only the mongo.properties. Is there any other way?
Note :- I dont want to have mongo.properties in a separate configMap.
Thanks
Now you can. Just run kubectl edit configmap <name of the configmap> on your command line. Then you can edit your configuration.
Another option is actually you can use this command:
kubectl create configmap some-config \
--from-file=some-key=some-config.yaml \
-n some-namespace \
-o yaml \
--dry-run=client | kubectl apply -f -
Refer to Github issue: Support updating config map and secret with --from-file
kubectl edit configmap -n <namespace> <configMapName> -o yaml
This opens up a vim editor with the configmap in yaml format. Now simply edit it and save it.
Here's a neat way to do an in-place update from a script.
The idea is;
export the configmap to YAML (kubectl get cm -o yaml)
use sed to do a command-line replace of an old value with a new value (sed "s|from|to")
push it back to the cluster using kubectl apply
In this worked example, I'm updating a log level variable from 'info' level logging to 'warn' level logging.
So, step 1, read the current config;
$ kubectl get cm common-config -o yaml
apiVersion: v1
data:
  CR_COMMON_LOG_LEVEL: info
kind: ConfigMap
Step 2, you modify it locally with a regular expression search-and-replace, using sed:
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|'
apiVersion: v1
data:
  CR_COMMON_LOG_LEVEL: warn
kind: ConfigMap
You can see the value has changed. Let's push it back up to the cluster;
Step 3; use kubectl apply -f -, which tells kubectl to read from stdin and apply it to the cluster;
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|' | \
kubectl apply -f -
configmap/common-config configured
No, you can't.
Replace in kubernetes will simply replace everything in that configmap. You can't just update one file or one single property in it.
However, if you check the client API, you will find that a configmap created from several files stores them as a map, where each key defaults to the file name and each value is the file content as a string. So you can write your own function that works on the existing key-value pairs.
This is what I found so far; if there is already an existing method to deal with this issue, please let me know :)
FYI, if you want to update just one or few properties, it is possible if you use patch. However, it is a little bit hard to implement.
Here is how you can add/modify/remove files in a configmap with some help from jq:
export configmap to a JSON file:
CM_FILE=$(mktemp -d)/config-map.json
kubectl get cm <configmap name> -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
  name=".data[\"$k\"]"
  cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k
done
add/modify a file:
echo '<paste file contents here>' > $DATA_FILES_DIR/<file name>.conf
remove a file:
rm $DATA_FILES_DIR/<file name>.conf
when done, update the configmap:
kubectl create configmap <configmap name> --from-file $DATA_FILES_DIR -o yaml --dry-run=client | kubectl apply -f -
delete temporary files and folders:
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
Here is a complete shell script to add a new file to a configmap (or replace an existing one) based on @Bruce S.'s answer https://stackoverflow.com/a/54876249/2862663
#!/bin/bash
# Requires jq to be installed
if [ -z "$1" ]
then
  echo "usage: update-config-map.sh <config map name> <config file to add>"
  exit 1
fi
if [ -z "$2" ]
then
  echo "usage: update-config-map.sh <config map name> <config file to add>"
  exit 1
fi
CM_FILE=$(mktemp -d)/config-map.json
kubectl get cm $1 -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
  name=".data[\"$k\"]"
  cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k
done
echo configmap: $CM_FILE tempdir: $DATA_FILES_DIR
echo will add file $2 to config
cp $2 $DATA_FILES_DIR
kubectl create configmap $1 --from-file $DATA_FILES_DIR -o yaml --dry-run=client | kubectl apply -f -
echo Done
echo removing temp dirs
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
Suggestion
I would highly recommend a CLI tool like k9s (which is more of a K8s CLI management tool).
With your cluster's context set in the terminal, just type k9s and you get a terminal UI where you can inspect all cluster resources.
Type ":" and enter the resource name (configmaps in our case) to list them.
Then choose the relevant configmap with the up and down arrows and press e to edit it.
For configmaps in all namespaces press 0; for a specific namespace press its number from the namespace menu, for example 1 for kube-system.
I managed to update a setting ("large-client-header-buffers") in the nginx pod's /etc/nginx/nginx.conf via configmap. Here are the steps I have followed..
Find the configmap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a configmap yaml
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: Please replace the namespace and configmap name as found in step 1.
Deploy the configmap yaml
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then you will see the change applied in the nginx controller pod after a few minutes, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Thanks to the sharing by NeverEndingQueue in How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes