Can I set a default namespace in Kubernetes?

Can I set the default namespace? That is:
$ kubectl get pods -n NAMESPACE
It would save me having to type it in each time, especially when I'm working in the same namespace for most of the day.

Yes, you can set the namespace as per the docs like so:
$ kubectl config set-context --current --namespace=NAMESPACE
Alternatively, you can use kubectx for this.
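For reference, kubens (which ships with kubectx) is used roughly like this - a sketch based on the project's README, with NAMESPACE as a placeholder:
kubens NAMESPACE   # switch the default namespace for the current context
kubens             # list namespaces, highlighting the active one
kubens -           # switch back to the previous namespace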

You can also use a temporary Linux alias:
alias k='kubectl -n kube-system '
Then use it like
k get pods
That's it ;)
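If you want the alias to survive new shell sessions, a minimal sketch (assuming bash and the usual ~/.bashrc location) is to append it to your shell startup file:
# Persist the alias for future shells; the kube-system namespace is just an example
echo "alias k='kubectl -n kube-system'" >> ~/.bashrc
source ~/.bashrc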

I used to use the aliases shown below and set the variable N to the namespace to use.
# Set N=-nNamespace; if N isn't set there's no harm - no namespace option will be used
alias k='kubectl $N'
alias kg='kubectl get $N'
alias ka='kubectl apply $N'
alias kl='kubectl logs $N'
To switch to the my-apps namespace, I'd use:
N=-nmy-apps
After this, the command:
kg pods
actually runs kubectl get -nmy-apps pods.
NOTE: If the bash variable N is not set, the command still works and runs as kubectl would by default.
To override the namespace set in the N variable, simply add another namespace option like -nAnotherNamespace; the last namespace given on the command line wins.
Of course, to switch more permanently (for the current shell), I'd simply set the N variable as shown:
N=-nAnotherNamespace
kg pods
While the above works, I later learned about kubens (bundled with kubectx, see GitHub), which works more permanently: it updates my $HOME/.kube/config file with a line that specifies the namespace to use for the current k8s context (dev in the example below):
contexts:
- context:
    cluster: dev
    namespace: AnotherNamespace   # <<< THIS LINE IS ADDED by kubens
    user: user1
  name: dev
current-context: dev
But all kubens does is what is already built into kubectl:
kubectl config set-context --current --namespace=AnotherNamespace
So really, a simple alias that is easier to type works just as well; I picked ksn (for "kubectl set namespace").
function ksn(){
  kubectl config set-context --current --namespace=$1
}
So now to switch context, I'm just using what is built into kubectl!
To switch to the namespace AnotherNamespace, I use:
ksn AnotherNamespace
Tada! The simplest "built in" solution.
Summary
For bash users, add the following to your $HOME/.bashrc file.
function ksn(){
  if [ "$1" = "" ]
  then
    kubectl config view -v6 2>&1 | grep 'Config loaded from file:' | sed -e 's/.*from file: /Config file: /'
    echo Current context: $(kubectl config current-context)
    echo Default namespace: $(kubectl config view --minify | grep namespace: | sed 's/.*namespace: *//')
  elif [ "$1" = "--unset" ]
  then
    kubectl config set-context --current --namespace=
  else
    kubectl config set-context --current --namespace=$1
  fi
}
This lets you set a namespace, see what your current namespace is, or remove the default namespace (using --unset). See the three commands below:
# Set namespace
ksn AnotherNamespace
# Display the selected namespace
ksn
Config file: /home/user/.kube/config
Current context: dev
Default namespace: AnotherNamespace
# Unset/remove a default namespace
ksn --unset
See also: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ for the command to view the current namespace:
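The command shown there to view the current namespace is essentially the same one the ksn function above runs internally:
kubectl config view --minify | grep namespace: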

How do I directly know the root parent workload that a Pod belongs to

Problem Statement:
I have a Pod which belongs to a workload, and I want to know the workload that initiated that Pod. One way of doing it right now is going through the ownerReferences and then going up the chain recursively, searching for the root parent workload that initiated the Pod.
Is there a way I can directly know which root parent workload initiated the Pod?
First, please remember that pods created by a specific workload have that workload's name in the pod name. For example, pods defined in Deployments follow this naming convention:
<replicaset-name>-<some-string>
and the ReplicaSet name is:
<deployment-name>-<some-string>
So for example:
Pod name: nginx-66b6c48dd5-84rml
Replica set name: nginx-66b6c48dd5
Deployment name: nginx
So the first part of the name, before the generated letters/numbers, is the root workload name.
Only pods defined in a StatefulSet have ordinal indexes, as follows:
<statefulset-name>-<ordinal index>
For example:
Pod name: web-0
StatefulSet name: web
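As a rough illustration of stripping those generated suffixes with plain shell tools (pod names taken from the examples above; this is only a naming heuristic, not an API lookup):
# Deployment pod: drop the ReplicaSet hash and the random pod suffix
echo "nginx-66b6c48dd5-84rml" | sed -E 's/-[^-]+-[^-]+$//'   # prints: nginx
# StatefulSet pod: drop the trailing ordinal index
echo "web-0" | sed -E 's/-[0-9]+$//'                         # prints: web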
Of course, based on the workload name alone we cannot tell what kind of workload it is. Check the second part of my answer below.
Setting the pod's name aside, your thinking is correct: the only way to find the "root" workload is to walk the chain recursively, finding each "parent" workload in turn.
When you run kubectl get pod {pod-name} -o json (to get all information about the pod), there is only information about the level directly above (as you said, for a pod defined in a Deployment, the pod information only references the ReplicaSet).
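For example, to print just that one level (the pod's direct owner), a jsonpath query along these lines can be used (pod name taken from the examples below):
# Show kind and name of the pod's direct owner (e.g. its ReplicaSet)
kubectl get pod nginx-66b6c48dd5-84rml -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'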
I wrote a small bash script that recursively checks every workload's ownerReferences until it finds the "root" workload (the root workload has no ownerReferences). It requires the jq utility to be installed on your system. Check this:
#!/bin/bash

function get_root_owner_reference {
  # Set kind, name and namespace
  kind=$1
  name=$2
  namespace=$3
  # Get ownerReferences
  owner_references=$(kubectl get $kind $name -o json -n $namespace | jq -r 'try (.metadata.ownerReferences[])')
  # If no ownerReferences exist, assume this is the root workload;
  # otherwise call get_root_owner_reference again on the owner
  if [[ -z "$owner_references" ]]; then
    resource_json=$(kubectl get $kind $name -o json -n $namespace)
    echo "Kind: $(echo $resource_json | jq -r '.kind')"
    echo "Name: $(echo $resource_json | jq -r '.metadata.name')"
  else
    get_root_owner_reference $(echo $owner_references | jq -r '.kind') $(echo $owner_references | jq -r '.name') $namespace
  fi
}

# Get namespace if set, otherwise use "default"
if [[ -z $3 ]]; then
  namespace="default"
else
  namespace=$3
fi

get_root_owner_reference $1 $2 $namespace
You need to provide two arguments - the resource type and the name of the resource. The namespace is optional (if not given, the Kubernetes default namespace is used).
Examples:
Pod defined in deployment:
user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod nginx-66b6c48dd5-84rml
Kind: Deployment
Name: nginx
Pod created from CronJob:
user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod hello-27247072-mv4l9
Kind: CronJob
Name: hello
Pod created straight from pod definition:
user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod hostipc-exec-pod
Kind: Pod
Name: hostipc-exec-pod
Pod from other namespace:
user@cloudshell:~/ownerRefernce$ ./get_root_owner_reference.sh pod kube-dns-679799b55c-7pzr7 kube-system
Kind: Deployment
Name: kube-dns

Copy a configmap but with another name in the same namespace (kubernetes/kubectl)

I have a configmap my-config in a namespace and need to make a copy (as part of some temporary experimentation) but with another name, so I end up with:
my-config
my-config-copy
I can do this with:
kubectl get cm my-config -o yaml > my-config-copy.yaml
then edit the name manually, followed by:
kubectl create -f my-config-copy.yaml
But is there a way to do it automatically in one line?
I can get some of the way with:
kubectl get cm my-config --export -o yaml | kubectl apply -f -
but I am missing the part with the new name (since names are immutable I know this is not standard behavior).
Also preferably without using export since:
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
Any suggestions?
You can achieve this by combining kubectl's patch and apply functions.
kubectl patch cm source-cm -p '{"metadata":{ "name":"target-cm"}}' --dry-run=client -o yaml | kubectl apply -f -
source-cm and target-cm are the source and target ConfigMap names.
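If you prefer not to rely on patch, a similar one-liner is possible with jq (a sketch, assuming jq is installed; it also drops the server-generated metadata fields before re-creating the object):
kubectl get cm my-config -o json \
  | jq '.metadata.name = "my-config-copy" | del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply -f -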

How to remove a label from a kubernetes object just with "kubectl apply -f file.yaml"?

I'm playing around with GitOps and ArgoCD in Redhat Openshift. My goal is to switch a worker node to an infra node.
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
In order to make the node an infra node, I want to add an "infra" label and remove the "worker" label from it. Before, the object looks like this (irrelevant labels omitted):
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
  name: node6.example.com
spec: {}
After applying a YAML file, it's supposed to look like this:
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/worker: ""
  name: node6.example.com
spec: {}
If I put the latter config in a file and do "kubectl apply -f ", the node has both infra and worker labels. So adding a label or changing the value of a label is easy, but is there a way to remove a label from an object's metadata by applying a YAML file?
You can delete the label with:
kubectl label node node6.example.com node-role.kubernetes.io/infra-
Then you can run kubectl apply again with the new label.
You will be up and running.
I would say it's not possible to do with kubectl apply; at least I tried and couldn't find any information about it.
As @Petr Kotas mentioned, you can always use
kubectl label node node6.example.com node-role.kubernetes.io/infra-
But I see you're looking for something else
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
So maybe the answer could be to use API clients, for example Python? I have found this example here, made by @Prafull Ladha:
As already mentioned, that is the correct kubectl example to delete a label, but there is no mention of removing labels using API clients. If you want to remove a label using the API, then you need to provide a new body with the labelname: None and then patch that body onto the node or pod. I am using the Kubernetes Python client API for example purposes:
from pprint import pprint
from kubernetes import client, config

config.load_kube_config()
client.configuration.debug = True

api_instance = client.CoreV1Api()
body = {
    "metadata": {
        "labels": {
            "label-name": None
        }
    }
}
api_response = api_instance.patch_node("minikube", body)
print(api_response)
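The same removal can also be done with kubectl directly, without a client library (a sketch; the node and label names match the Python example above):
# A null value in a merge patch deletes the label from the node
kubectl patch node minikube --type merge -p '{"metadata":{"labels":{"label-name":null}}}'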
Try setting the worker label to false:
node-role.kubernetes.io/worker: "false"
Worked for me on OpenShift 4.4.
Edit:
This doesn't work. What happened was:
Applied a YML file containing node-role.kubernetes.io/worker: "false"
An automated process ran and deleted the node-role.kubernetes.io/worker label from the node (due to it not being specified in the YML it would automatically apply)
What's funny is that the automated process would not delete the label if it was empty instead of set to false.
I've pretty successfully changed a node label in my Kubernetes cluster (created using kubeadm) using kubectl replace and kubectl apply.
Required: if your node configuration was changed manually using an imperative command like kubectl label, you first need to fix the last-applied-configuration annotation using the following command (replace node2 with your node name):
kubectl get node node2 -o yaml | kubectl apply -f -
Note: It works in the same way with all types of Kubernetes objects (with slightly different consequences. Always check the results).
Note2: --export argument for kubectl get is deprecated, and it works well without it, but if you use it the last-applied-configuration annotation appears to be much shorter and easier to read.
Without applying existing configuration, the next kubectl apply command will ignore all fields that are not present in the last-applied-configuration annotation.
The following example illustrates that behavior:
kubectl get node node2 -o yaml | grep node-role
{"apiVersion":"v1","kind":"Node","metadata":{"annotations":{"flannel.alpha.coreos.com/backend-data":"{\"VtepMAC\":\"46:c6:d1:f0:6c:0a\"}","flannel.alpha.coreos.com/backend-type":"vxlan","flannel.alpha.coreos.com/kube-subnet-manager":"true","flannel.alpha.coreos.com/public-ip":"10.156.0.11","kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"creationTimestamp":null,
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"node2",
"kubernetes.io/os":"linux",
"node-role.kubernetes.io/worker":""}, # <--- important line: only worker label is present
"name":"node2","selfLink":"/api/v1/nodes/node2"},"spec":{"podCIDR":"10.244.2.0/24"},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"architecture":"","bootID":"","containerRuntimeVersion":"","kernelVersion":"","kubeProxyVersion":"","kubeletVersion":"","machineID":"","operatingSystem":"","osImage":"","systemUUID":""}}}
node-role.kubernetes.io/santa: ""
node-role.kubernetes.io/worker: ""
Let's check what happened with the node-role.kubernetes.io/santa label if I try to replace worker with infra and remove santa (only worker is present in the annotation):
# kubectl diff is used to compare the current online configuration with the configuration as it would be if applied
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-380689040/v1.Node..node2 /tmp/MERGED-682760879/v1.Node..node2
--- /tmp/LIVE-380689040/v1.Node..node2 2020-04-08 17:20:18.108809972 +0000
+++ /tmp/MERGED-682760879/v1.Node..node2 2020-04-08 17:20:18.120809972 +0000
@@ -18,8 +18,8 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
+ node-role.kubernetes.io/infra: "" # <-- created as desired
node-role.kubernetes.io/santa: "" # <-- ignored, because the label isn't present in the last-applied-configuration annotation
- node-role.kubernetes.io/worker: "" # <-- removed as desired
name: node2
resourceVersion: "60973814"
selfLink: /api/v1/nodes/node2
exit status 1
After fixing the annotation (by running kubectl get node node2 -o yaml | kubectl apply -f -), kubectl apply works pretty well for replacing and removing labels:
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-107488917/v1.Node..node2 /tmp/MERGED-924858096/v1.Node..node2
--- /tmp/LIVE-107488917/v1.Node..node2 2020-04-08 18:01:55.776699954 +0000
+++ /tmp/MERGED-924858096/v1.Node..node2 2020-04-08 18:01:55.792699954 +0000
@@ -18,8 +18,7 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
- node-role.kubernetes.io/santa: "" # <-- removed as desired
- node-role.kubernetes.io/worker: "" # <-- removed as desired, literally replaced with the following label
+ node-role.kubernetes.io/infra: "" # <-- created as desired
name: node2
resourceVersion: "60978298"
selfLink: /api/v1/nodes/node2
exit status 1
Here are a few more examples:
# Check the original label ( last filter removes last applied config annotation line)
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# Replace the label "infra" with "worker" using kubectl replace syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""#node-role.kubernetes.io/worker: ""#' | kubectl replace -f -
node/node2 replaced
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/worker: ""
# label replaced -------^^^^^^
# Replace the label "worker" back to "infra" using kubectl apply syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# label replaced -------^^^^^
# Remove the label from the node ( for demonstration purpose)
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""##' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
# empty output
# label "infra" has been removed
You may see the following warning when you use kubectl apply -f on the resource created using imperative commands like kubectl create or kubectl expose for the first time:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
In this case the last-applied-configuration annotation will be created with the content of the file used in the kubectl apply -f filename.yaml command. It may not contain all parameters and labels that are present in the live object.
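To inspect the last-applied-configuration annotation that the behavior above depends on, kubectl has a built-in subcommand (node name as in the examples above):
# Print the last-applied configuration stored on the node, if any
kubectl apply view-last-applied node node2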

Is there a way to share a configMap in kubernetes between namespaces?

We are using one namespace for the development environment and one for the staging environment. Inside each of these namespaces we have several ConfigMaps and Secrets, but there are a lot of shared variables between the two environments, so we would like to have a common file for those.
Is there a way to have a base configMap into the default namespace and refer to it using something like:
- envFrom:
  - configMapRef:
      name: default.base-config-map
If this is not possible, is there no other way other than duplicate the variables through namespaces?
Kubernetes 1.13 and earlier
They cannot be shared, because a ConfigMap can only be accessed by pods in its own namespace. Names of resources need to be unique within a namespace, but not across namespaces.
The workaround is to copy it over.
Copy secrets between namespaces
kubectl get secret <secret-name> --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Copy configmaps between namespaces
kubectl get configmap <configmap-name>  --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Kubernetes 1.14+
The --export flag was deprecated in 1.14.
Instead, the following command can be used:
kubectl get secret <secret-name> --namespace=<source-namespace>  -o yaml \
| sed 's/namespace: <from-namespace>/namespace: <to-namespace>/' \
| kubectl create -f -
If someone still sees a need for the flag, there's an export script written by @zoidbergwill.
Please use the following command to copy from one namespace to another
kubectl get configmap <configmap-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -
kubectl get secret <secret-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -
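If the sed replacement of the namespace line feels fragile, an alternative sketch (assuming jq) is to strip the namespace and other server-generated fields and let kubectl set the target namespace:
kubectl get configmap <configmap-name> -n <source-namespace> -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp)' \
  | kubectl apply -n <dest-namespace> -f -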

Kubectl update configMap

I am using the following command to create a configMap.
kubectl create configmap test --from-file=./application.properties --from-file=./mongo.properties --from-file=./logback.xml
Now, I have modified a value for a key in mongo.properties which I need to update in Kubernetes.
Option 1:
kubectl edit test
Here, it opens the entire configMap. But I want to just update mongo.properties and hence want to see only mongo.properties. Is there any other way?
Note: I don't want to have mongo.properties in a separate configMap.
Thanks
Now you can. Just throw: kubectl edit configmap <name of the configmap> on your command line. Then you can edit your configuration.
Another option is actually you can use this command:
kubectl create configmap some-config \
--from-file=some-key=some-config.yaml \
-n some-namespace \
-o yaml \
--dry-run | kubectl apply -f -
Refer to Github issue: Support updating config map and secret with --from-file
kubectl edit configmap -n <namespace> <configMapName> -o yaml
This opens your default editor (typically vim) with the configmap in YAML format. Now simply edit it and save it.
Here's a neat way to do an in-place update from a script.
The idea is;
export the configmap to YAML (kubectl get cm -o yaml)
use sed to do a command-line replace of an old value with a new value (sed "s|from|to")
push it back to the cluster using kubectl apply
In this worked example, I'm updating a log level variable from 'info' level logging to 'warn' level logging.
So, step 1, read the current config;
$ kubectl get cm common-config -o yaml
apiVersion: v1
data:
CR_COMMON_LOG_LEVEL: info
kind: ConfigMap
Step 2, you modify it locally with a regular expression search-and-replace, using sed:
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|'
apiVersion: v1
data:
CR_COMMON_LOG_LEVEL: warn
kind: ConfigMap
You can see the value has changed. Let's push it back up to the cluster;
Step 3; use kubectl apply -f -, which tells kubectl to read from stdin and apply it to the cluster;
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|' | \
kubectl apply -f -
configmap/common-config configured
No, you can't.
Replace in Kubernetes will simply replace everything in that configMap. You can't just update one file or one single property in it.
However, if you check with the client API, you will find that when you create a configMap from lots of files, those files are stored as a map, where the key is the file name by default and the value is the file content encoded as a string. So you can write your own function based on the existing key-value pairs in that map.
This is what I found so far, if you find there is already existing method to deal with this issue, please let me know :)
FYI, if you want to update just one or a few properties, it is possible with patch. However, it is a little bit hard to implement.
this and this may help
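As a hedged sketch of that patch approach (assuming jq 1.6+ for --rawfile; the configmap and file names match the question), a JSON merge patch can replace just one data key while leaving the other keys untouched:
# Replace only the mongo.properties key in the "test" configmap
# with the contents of the local ./mongo.properties file
kubectl patch configmap test --type merge \
  -p "$(jq -n --rawfile v ./mongo.properties '{data: {"mongo.properties": $v}}')"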
Here is how you can add/modify/remove files in a configmap with some help from jq:
export configmap to a JSON file:
CM_FILE=$(mktemp -d)/config-map.json
oc get cm <configmap name> -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
  name=".data[\"$k\"]"
  cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k;
done
add/modify a file:
echo '<paste file contents here>' > $DATA_FILES_DIR/<file name>.conf
remove a file:
rm $DATA_FILES_DIR/<file name>.conf
when done, update the configmap:
kubectl create configmap <configmap name> --from-file $DATA_FILES_DIR -o yaml --dry-run | kubectl apply -f -
delete temporary files and folders:
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
Here is a complete shell script to add a new file to a configmap (or replace an existing one), based on @Bruce S.'s answer https://stackoverflow.com/a/54876249/2862663
#!/bin/bash
# Requires jq to be installed
if [ -z "$1" ]
then
  echo "usage: update-config-map.sh <config map name> <config file to add>"
  exit 1
fi
if [ -z "$2" ]
then
  echo "usage: update-config-map.sh <config map name> <config file to add>"
  exit 1
fi
CM_FILE=$(mktemp -d)/config-map.json
kubectl get cm $1 -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
  name=".data[\"$k\"]"
  cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k;
done
echo configmap: $CM_FILE tempdir: $DATA_FILES_DIR
echo will add file $2 to config
cp $2 $DATA_FILES_DIR
kubectl create configmap $1 --from-file $DATA_FILES_DIR -o yaml --dry-run | kubectl apply -f -
echo Done
echo removing temp dirs
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
Suggestion
I would highly consider using a CLI editor like k9s (which is more of a K8s CLI management tool).
Once your cluster's context is set in the terminal, you just type k9s and you land in a terminal UI where you can inspect all cluster resources.
Type ":" and enter the resource name (configmaps in our case) to list those resources.
Then choose the relevant configmap with the up and down arrows and press e to edit it.
To see ConfigMaps in all namespaces choose 0; for a specific namespace choose its number from the namespace menu, for example 1 for kube-system.
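For reference, k9s can also be started already scoped to a namespace or context (flags as listed by k9s --help; the names below are illustrative):
k9s -n kube-system   # open k9s scoped to the kube-system namespace
k9s --context dev    # open k9s against a specific kubeconfig context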
I managed to update a setting ("large-client-header-buffers") in the nginx pod's /etc/nginx/nginx.conf via a configmap. Here are the steps I followed.
Find the configmap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a configmap yaml
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: Please replace the namespace and configmap name as per your findings in step 1.
Deploy the configmap yaml
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then you will see the change propagated to the nginx controller pod after a few minutes, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Thanks to the sharing by NeverEndingQueue in How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes