How to execute an argument in Kubernetes?

I have a misunderstanding of how to execute $() command substitutions in exec. I'm creating a job in Kubernetes with these params:
command:
- ./kubectl
- -n
- $MONGODB_NAMESPACE
- exec
- -ti
- $(kubectl
- -n
- $MONGODB_NAMESPACE
- get
- pods
- --selector=app=$MONGODB_CONTAINER_NAME
- -o
- jsonpath='{.items[*].metadata.name}')
- --
- /opt/mongodb-maintenance.sh
but the part with $(kubectl -n ... --selector ...) is treated as a literal string and never executed. Please tell me how to do it properly. Thanks!

As far as I know this is not achievable by putting each section as an array element: the command array is handed to the container runtime verbatim, with no shell in between, so $() command substitution is never evaluated. Instead you can wrap the whole command in a shell, like the following:
command:
- /bin/sh
- -c
- |
  ./kubectl -n $MONGODB_NAMESPACE exec -ti $(kubectl -n $MONGODB_NAMESPACE get pods --selector=app=$MONGODB_CONTAINER_NAME -o jsonpath='{.items[*].metadata.name}') -- /opt/mongodb-maintenance.sh
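For context, here is a minimal sketch of how the fixed command could sit inside a full Job manifest. The job name, image, and env values are placeholders, the pod's service account would need RBAC permissions to get pods and create exec sessions, and -t is dropped because a Job's container has no TTY attached:
apiVersion: batch/v1
kind: Job
metadata:
  name: mongodb-maintenance          # placeholder name
spec:
  template:
    spec:
      containers:
      - name: maintenance
        image: bitnami/kubectl       # assumption: any image that ships kubectl and sh
        env:
        - name: MONGODB_NAMESPACE
          value: my-namespace        # placeholder
        - name: MONGODB_CONTAINER_NAME
          value: mongodb             # placeholder
        command:
        - /bin/sh
        - -c
        - |
          kubectl -n $MONGODB_NAMESPACE exec -i $(kubectl -n $MONGODB_NAMESPACE get pods --selector=app=$MONGODB_CONTAINER_NAME -o jsonpath='{.items[*].metadata.name}') -- /opt/mongodb-maintenance.sh
      restartPolicy: Never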

From the help output of kubectl exec, I noticed that you can use -- to separate your arguments:
# List contents of /usr from the first container of pod 123456-7890 and sort by modification time.
# If the command you want to execute in the pod has any flags in common (e.g. -i),
# you must use two dashes (--) to separate your command's flags/arguments.
# Also note, do not surround your command and its flags/arguments with quotes
# unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr").
kubectl exec 123456-7890 -i -t -- ls -t /usr

Related

Copy a configmap but with another name in the same namespace (kubernetes/kubectl)

I have a configmap my-config in a namespace and need to make a copy (part of some temporary experimentation) but with another name, so I end up with:
my-config
my-config-copy
I can do this with:
kubectl get cm my-config -o yaml > my-config-copy.yaml
edit the name manually, followed by:
kubectl create -f my-config-copy.yaml
But is there a way to do it automatically in one line?
I can get some of the way with:
kubectl get cm my-config --export -o yaml | kubectl apply -f -
but I am missing the part with the new name (since names are immutable I know this is not standard behavior).
Also preferably without using export since:
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
Any suggestions?
You can achieve this by combining kubectl's patch and apply functions.
kubectl patch cm source-cm -p '{"metadata":{ "name":"target-cm"}}' --dry-run=client -o yaml | kubectl apply -f -
source-cm and target-cm are the config map names
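Applied to the names from the question above, the one-liner becomes:
kubectl patch cm my-config -p '{"metadata":{"name":"my-config-copy"}}' --dry-run=client -o yaml | kubectl apply -f -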

Any way to autocomplete multiple objects in a kubectl command?

k8s kubectl autocomplete is great, until you want to run a command for multiple things.
For example, I want to delete 2 pods, I can run:
k delete pod foo-12345 bar-67890
But I can only autocomplete with foo. What about bar<tab>?
It is at best a partial workaround. After a few tests, I can say that your goal can be achieved, but it has some cons. There might be a 3rd-party solution, but I am not aware of any.
bash-completion
The 'bash-completion' package is required for making kubectl shell completion work as expected. You can install it using apt-get install bash-completion. More information can be found in the Kubernetes documentation under Optional kubectl configurations.
Debugging
Bash-completion has its own syntax and functions. For debugging purposes you can run export BASH_COMP_DEBUG_FILE=$HOME/compdebug.txt. It will create the compdebug.txt file and send all debug output from the kubectl shell completion functions to it. Example output below:
__kubectl_parse_get: get completion by kubectl get -o template --template="{{ range .items }}{{ .metadata.name }} {{ end }}" "first-deployment-85b75bf4f9-mn8zh"
__kubectl_handle_word: c is 0 words[c] is kubectl
__kubectl_handle_command: c is 0 words[c] is kubectl
__kubectl_handle_command: looking for _kubectl_root_command
__kubectl_handle_word: c is 1 words[c] is get
__kubectl_handle_command: c is 1 words[c] is get
__kubectl_handle_command: looking for _kubectl_get
__kubectl_handle_word: c is 2 words[c] is pod
__kubectl_handle_noun: c is 2 words[c] is pod
__kubectl_handle_reply
__kubectl_parse_get: get completion by kubectl get -o template --template="{{ range .items }}{{ .metadata.name }} {{ end }}" "pod"
How it works
kubectl doesn't complete more than one object because its autocomplete function runs the sub-request kubectl get argN to get the list of objects, where argN is the last noun of the existing command line. The first time you use it, it takes pod from the command line as argN and runs kubectl get pod. The second time, the last noun is already a pod name, so the sub-request looks like kubectl get podname1 instead of kubectl get pod, which causes an error and empty output instead of a list of objects.
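To make that concrete, this is roughly what the completion machinery ends up running behind the scenes (a simplified sketch):
# 1st <TAB>: the last noun is the resource type
kubectl get pod        # returns the list of pod names
# 2nd <TAB>: the last noun is now a pod name
kubectl get podname1   # error: "podname1" is not a resource type, so no suggestions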
Test Scenario
To obtain the completion script you can use the command kubectl completion bash > k8scompletion.sh.
It's good to create a second copy of the script so that you can roll back to the default settings - kubectl completion bash > k8scompletion-copy.sh.
$ vi k8scompletion.sh
In the function __kubectl_get_resource() I've edited __kubectl_parse_get "${nouns[${#nouns[@]} -1]}" to __kubectl_parse_get "${nouns[0]}":
__kubectl_get_resource()
{
    if [[ ${#nouns[@]} -eq 0 ]]; then
        local kubectl_out
        if kubectl_out=$(__kubectl_debug_out "kubectl api-resources $(__kubectl_override_flags) -o name --cached --request-timeout=5s --verbs=get"); then
            COMPREPLY=( $( compgen -W "${kubectl_out[*]}" -- "$cur" ) )
            return 0
        fi
        return 1
    fi
    __kubectl_parse_get "${nouns[0]}"
}
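To locate the function inside the generated script, a quick grep works:
grep -n '__kubectl_get_resource' k8scompletion.sh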
Script adjustment overview
The adjusted script allows you to complete Kubernetes resources and all objects of that resource. The following workaround is enough for demonstrating and solving the problem mentioned in the question, but it can cause some side effects, so please pay attention to the results you get.
Side Note
The shell completion script varies from one kubectl version to another, thus it is hard to create a universal patch.
Test Output
$ kubectl delete <TAB>
apiservices.apiregistration.k8s.io nodes.metrics.k8s.io
backendconfigs.cloud.google.com persistentvolumeclaims
certificatesigningrequests.certificates.k8s.io persistentvolumes
clusterrolebindings.rbac.authorization.k8s.io poddisruptionbudgets.policy
clusterroles.rbac.authorization.k8s.io pods
componentstatuses podsecuritypolicies.policy
configmaps pods.metrics.k8s.io
controllerrevisions.apps podtemplates
cronjobs.batch priorityclasses.scheduling.k8s.io
csidrivers.storage.k8s.io replicasets.apps
... and a few others
$ kubectl delete pod<TAB>
poddisruptionbudgets.policy pods podsecuritypolicies.policy pods.metrics.k8s.io podtemplates
$ kubectl delete pod <TAB><TAB>
httpd-deploy-1-6c4b998b99-jk876 httpd-deploy-6867dfd79c-tr648 nginx2 nginx-deploy-2-94985d7bd-bdb4d
httpd-deploy-2-64dc95c468-s7vt2 nginx nginx-deploy-1-5494687955-sm5lh nginx-deploy-85df977897-44lcn
$ kubectl get pod nginx<TAB>
nginx nginx2 nginx-deploy-1-5494687955-sm5lh nginx-deploy-2-94985d7bd-bdb4d nginx-deploy-85df977897-44lcn
$ kubectl get pod nginx-deploy-<TAB>
nginx-deploy-1-5494687955-sm5lh nginx-deploy-2-94985d7bd-bdb4d nginx-deploy-85df977897-44lcn
$ kubectl get pod nginx-deploy-1<TAB>
### Pressing Tab here autocompletes to nginx-deploy-1-5494687955-sm5lh
$ kubectl get pod nginx-deploy-1-5494687955-sm5lh <TAB>
httpd-deploy-1-6c4b998b99-jk876 httpd-deploy-6867dfd79c-tr648 nginx2 nginx-deploy-2-94985d7bd-bdb4d
httpd-deploy-2-64dc95c468-s7vt2 nginx nginx-deploy-1-5494687955-sm5lh nginx-deploy-85df977897-44lcn
$ kubectl delete pod nginx-deploy-1-5494687955-sm5lh nginx<TAB>
nginx nginx2 nginx-deploy-1-5494687955-29vqs nginx-deploy-2-94985d7bd-bdb4d nginx-deploy-85df977897-44lcn
$ kubectl delete pod nginx-deploy-1-5494687955-sm5lh nginx2 <TAB>
httpd-deploy-1-6c4b998b99-jk876 httpd-deploy-6867dfd79c-tr648 nginx2 nginx-deploy-2-94985d7bd-bdb4d
httpd-deploy-2-64dc95c468-s7vt2 nginx nginx-deploy-1-5494687955-29vqs nginx-deploy-85df977897-44lcn
$ kubectl delete pod nginx-deploy-1-5494687955-sm5lh nginx2
Rollback changes
To switch between the patched and the default completion script, load the relevant file with the source command:
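source k8scompletion.sh        # apply the patched completion
source k8scompletion-copy.sh   # roll back to the stock completion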

What are production uses for Kubernetes pods without an associated deployment?

I have seen the one-pod <-> one-container rule, which seems to apply to business logic pods, but has exceptions when it comes to shared network/volume related resources.
What are encountered production uses of deploying pods without a deployment configuration?
I use pods directly to start a CentOS (or other operating system) container in which to verify connections or test command-line options.
As a specific example, below is a shell script that starts an Ubuntu container. You can easily modify the manifest to test secret access, or change the service account to test access control.
#!/bin/bash
RANDOMIZER=$(uuid | cut -b-5)
POD_NAME="bash-shell-$RANDOMIZER"
IMAGE=ubuntu
NAMESPACE=$(uuid)
kubectl create namespace $NAMESPACE
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  namespace: $NAMESPACE
spec:
  containers:
  - name: $POD_NAME
    image: $IMAGE
    command: ["/bin/bash"]
    args: ["-c", "while true; do date; sleep 5; done"]
  hostNetwork: true
  dnsPolicy: Default
  restartPolicy: Never
EOF
echo "---------------------------------"
echo "| Press ^C when pod is running. |"
echo "---------------------------------"
kubectl -n $NAMESPACE get pod $POD_NAME -w
echo
kubectl -n $NAMESPACE exec -it $POD_NAME -- /bin/bash
kubectl -n $NAMESPACE delete pod $POD_NAME
kubectl delete namespace $NAMESPACE
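A lighter-weight alternative for this kind of throwaway debugging shell is kubectl run, which creates a single standalone pod and deletes it when you exit (a sketch; the pod name and image are arbitrary):
kubectl run debug-shell --rm -it --image=ubuntu --restart=Never -- /bin/bash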
In our case, we use standalone pods for debugging purposes only.
Otherwise you want your configuration to be stateless and written in YAML files.
For instance, debugging the dns resolution: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
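When you are done, the same manifest URL can be used for cleanup:
kubectl delete -f https://k8s.io/examples/admin/dns/dnsutils.yaml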

Is there a way to share a configMap in kubernetes between namespaces?

We are using one namespace for the develop environment and one for the staging environment. Inside each of these namespaces we have several configMaps and secrets, but there are a lot of shared variables between the two environments, so we would like to have a common file for those.
Is there a way to have a base configMap in the default namespace and refer to it using something like:
- envFrom:
  - configMapRef:
      name: default.base-config-map
If this is not possible, is there no other way than duplicating the variables across namespaces?
Kubernetes 1.13 and earlier
They cannot be shared, because they cannot be accessed from pods outside of their namespace. Names of resources need to be unique within a namespace, but not across namespaces.
The workaround is to copy them over.
Copy secrets between namespaces
kubectl get secret <secret-name> --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Copy configmaps between namespaces
kubectl get configmap <configmap-name>  --namespace=<source-namespace> --export -o yaml \
| kubectl apply --namespace=<destination-namespace> -f -
Kubernetes 1.14+
The --export flag was deprecated in 1.14.
Instead, the following command can be used:
kubectl get secret <secret-name> --namespace=<source-namespace>  -o yaml \
| sed 's/namespace: <from-namespace>/namespace: <to-namespace>/' \
| kubectl create -f -
If someone still sees a need for the flag, there’s an export script written by @zoidbergwill.
Please use the following command to copy from one namespace to another
kubectl get configmap <configmap-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -
kubectl get secret <secret-name> -n <source-namespace> -o yaml | sed 's/namespace: <source-namespace>/namespace: <dest-namespace>/' | kubectl create -f -
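If you want to avoid the sed replacement (which silently does nothing when no namespace: line is present in the output), a jq-based variant that strips the namespace-bound metadata instead is one option (a sketch, assuming jq is installed):
kubectl get configmap <configmap-name> -n <source-namespace> -o json \
| jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
| kubectl apply -n <dest-namespace> -f -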

Kubectl update configMap

I am using the following command to create a configMap.
kubectl create configmap test --from-file=./application.properties --from-file=./mongo.properties --from-file=./logback.xml
Now, I have modified a value for a key in mongo.properties which I need to update in Kubernetes.
Option 1:
kubectl edit test
Here, it opens the entire configmap. But I want to update just mongo.properties, and hence want to see only mongo.properties. Is there any other way?
Note: I don't want to have mongo.properties in a separate configMap.
Thanks
Now you can. Just run kubectl edit configmap <name of the configmap> on your command line. Then you can edit your configuration.
Another option is to use this command:
kubectl create configmap some-config \
--from-file=some-key=some-config.yaml \
-n some-namespace \
-o yaml \
--dry-run | kubectl apply -f -
Refer to the GitHub issue: Support updating config map and secret with --from-file
kubectl edit configmap -n <namespace> <configMapName> -o yaml
This opens up a vim editor with the configmap in yaml format. Now simply edit it and save it.
Here's a neat way to do an in-place update from a script.
The idea is:
export the configmap to YAML (kubectl get cm -o yaml)
use sed to do a command-line replace of an old value with a new value (sed "s|from|to|")
push it back to the cluster using kubectl apply
In this worked example, I'm updating a log level variable from 'info' level logging to 'warn' level logging.
So, step 1, read the current config:
$ kubectl get cm common-config -o yaml
apiVersion: v1
data:
  CR_COMMON_LOG_LEVEL: info
kind: ConfigMap
Step 2, you modify it locally with a regular expression search-and-replace, using sed:
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|'
apiVersion: v1
data:
  CR_COMMON_LOG_LEVEL: warn
kind: ConfigMap
You can see the value has changed. Let's push it back up to the cluster.
Step 3: use kubectl apply -f -, which tells kubectl to read from stdin and apply it to the cluster:
$ kubectl get cm common-config -o yaml | \
sed -e 's|CR_COMMON_LOG_LEVEL: info|CR_COMMON_LOG_LEVEL: warn|' | \
kubectl apply -f -
configmap/common-config configured
No, you can't.
Replace in Kubernetes will simply replace everything in that configmap. You can't just update one file or one single property in it.
However, if you check with the client API, you will find that if you create a configmap with lots of files, those files are stored in a hash map, where the key is the file name by default and the value is the file content encoded as a string. So you can write your own function based on the existing key-value pairs in that map.
This is what I found so far; if you find there is already an existing method to deal with this issue, please let me know :)
FYI, if you want to update just one or a few properties, it is possible if you use patch. However, it is a little bit hard to implement.
this and this may help
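For example, a merge patch can replace a single data key in one command (a sketch using the configmap name from the question; note that the value must be the complete new file content, since a key's value cannot be partially patched):
kubectl patch configmap test --type merge -p '{"data":{"mongo.properties":"<new file contents>"}}'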
Here is how you can add/modify/remove files in a configmap with some help from jq:
export configmap to a JSON file:
CM_FILE=$(mktemp -d)/config-map.json
oc get cm <configmap name> -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
    name=".data[\"$k\"]"
    cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k
done
add/modify a file:
echo '<paste file contents here>' > $DATA_FILES_DIR/<file name>.conf
remove a file:
rm $DATA_FILES_DIR/<file name>.conf
when done, update the configmap:
kubectl create configmap <configmap name> --from-file $DATA_FILES_DIR -o yaml --dry-run | kubectl apply -f -
delete temporary files and folders:
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
Here is a complete shell script to add a new file to a configmap (or replace an existing one), based on @Bruce S.'s answer https://stackoverflow.com/a/54876249/2862663
#!/bin/bash
# Requires jq to be installed
if [ -z "$1" ]
then
    echo "usage: update-config-map.sh <config map name> <config file to add>"
    exit 1
fi
if [ -z "$2" ]
then
    echo "usage: update-config-map.sh <config map name> <config file to add>"
    exit 1
fi
CM_FILE=$(mktemp -d)/config-map.json
kubectl get cm $1 -o json > $CM_FILE
DATA_FILES_DIR=$(mktemp -d)
files=$(cat $CM_FILE | jq '.data' | jq -r 'keys[]')
for k in $files; do
    name=".data[\"$k\"]"
    cat $CM_FILE | jq -r $name > $DATA_FILES_DIR/$k
done
echo configmap: $CM_FILE tempdir: $DATA_FILES_DIR
echo will add file $2 to config
cp $2 $DATA_FILES_DIR
kubectl create configmap $1 --from-file $DATA_FILES_DIR -o yaml --dry-run | kubectl apply -f -
echo Done
echo removing temp dirs
rm -rf $CM_FILE
rm -rf $DATA_FILES_DIR
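Usage, reusing the configmap and file names from the question:
./update-config-map.sh test mongo.properties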
Suggestion
I would highly consider using a CLI editor like k9s (which is more like a K8s CLI management tool).
When your cluster's context is set in the terminal, you just type k9s and you will reach a nice terminal UI where you can inspect all cluster resources.
Just type ":" and enter the resource name (configmaps in our case).
Then you can choose the relevant configmap with the up and down arrows and type e to edit it.
For all configmaps in all namespaces you choose 0; for a specific namespace you choose its number from the upper-left menu, for example 1 for kube-system.
I managed to update a setting ("large-client-header-buffers") in the nginx pod's /etc/nginx/nginx.conf via a configmap. Here are the steps I followed.
Find the configmap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a configmap yaml
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: Please replace the namespace and configmap name as per the findings in step 1.
Deploy the configmap yaml
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then you will see that the change has propagated to the nginx controller pod after a few minutes, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Thanks to the sharing by NeverEndingQueue in How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes