How can I login to any pod within any namespace in kubernetes and run any command? [closed] - kubernetes

My requirement is to log in to a pod without first selecting the namespace and the pod name, and then to run an arbitrary command inside it. There was no thread or question that answered this, so below is a solution I came up with. If it can be improved, please suggest how.
What I was looking for was a way to build automation (or a krew plugin) for logging in to any pod in any namespace and running any command inside it. It is purely automation based; there are plenty of plugins that do the same work, like k9s and other krew plugins, but my requirement was more generic and lightweight, something that could be used in a pipeline without any third-party tools.

Create a Shell Script
login.sh
#!/bin/bash
# $1 is the namespace, $2 is the row number of the pod, $3 is the command to run inside the pod
# $3 can be bash, sh, or a command like echo "test"
# Example: ./login.sh test-ns 1 bash   or   ./login.sh test-ns 1 'echo "test"'

# Pick the pod name from row $2 of "kubectl get pods" (header excluded)
pod=$(kubectl get pods -n "$1" | grep -v NAME | awk -v j="$2" 'FNR == j {print $1}')
# $3 is left unquoted on purpose so a multi-word command is split into arguments
kubectl exec -it "$pod" -n "$1" -- $3
How to use it?
login.sh $namespace $pod_number $command
1. Create the above script, make it executable, and copy it into the /usr/local/bin directory (or any other directory in your $PATH)
chmod +x login.sh
#Check if /usr/local/bin exists in PATH variable, if not then run the export command
echo $PATH
export PATH=$PATH:/usr/local/bin
cp -p login.sh /usr/local/bin/login.sh
2. Run kubectl get pods to get the row number of the pod in which you want the command to run
helloworld-v1-5dfcf5d5cd-v7xtw 1/1 Running 0 8d
httpbin-56db79f4f5-j2cxz 1/1 Running 1 8d
test-cc5b6bfd-2hhmr 1/1 Running 0 39h
Say I want to exec into the second pod, with bash as the shell.
3. Run the command to use the script
login.sh default 2 bash
Demo Output
The output would look something like:
shubham.yadav@my-MAC:~/k8s $ ./login.sh default 2 bash
root@httpbin-56db79f4f5-j2cxz:/#
Another example, just to make the implementation clearer:
shubham.yadav@my-MAC:~/k8s $ login.sh qa-test 2 'echo "From the pod Login Succeeded"'
"From the pod Login Succeeded"
Note: In case you have a large number of pods in the namespace, you can use awk's NR variable to print row numbers next to the pod names.
kp is an alias for kubectl get pods
kp | grep -v NAME | awk '{print NR, $1}'
1 details-v1-78d78fbddf-zksf7
2 helloworld-v1-5dfcf5d5cd-wl2nx
3 httpbin-56db79f4f5-wj4ql
4 load-generator-5cdbd66865-fxpbm
5 productpage-v1-85b9bf9cd7-qp79l
6 ratings-v1-6c9dbf6b45-xm6ww
7 reviews-v1-564b97f875-cx86c
8 reviews-v2-568c7c9d8f-xxc86
9 reviews-v3-67b4988599-p98ft
10 test-cc5b6bfd-68tqg
11 test-cc5b6bfd-8q694
12 unset-deployment-7896c75bf6-5w27b
13 web-v1-fc4d58bdc-pcv9p
14 web-v2-7bf5dd654d-684t9
15 web-v3-7567d5d6b9-sqrpg
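If you want that numbered listing as a reusable helper, a small companion script (hypothetical, not part of the original post) can wrap the same pipeline and take the namespace as an argument:
#!/bin/bash
# listpods.sh (hypothetical helper): print row numbers next to pod names
# Usage: ./listpods.sh <namespace>
kubectl get pods -n "$1" | grep -v NAME | awk '{print NR, $1}'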
Creating a Binary for the Shell Script
In case you are interested in creating a binary from the above script, you can follow the linked guide, which shows how to turn a shell script into a binary.
You can also change the binary name from login.sh to a friendlier name such as execute:
shubham.yadav@my-MAC:/usr/local/bin $ mv login.sh execute
shubham.yadav@my-MAC:/usr/local/bin $ execute qa-test 2 'echo "Binary Name Changed"'
"Binary Name Changed"

Related

Kubectl appears to be discarding standard output

I'm trying to copy the contents of a large (~350 files, ~40MB total) directory from a Kubernetes pod to my local machine. I'm using the technique described here.
Sometimes it succeeds, but very frequently the standard output piped to the tar xf command on my host appears to get truncated. When that happens, I see errors like:
<some file in the archive being transmitted over the pipe>: Truncated tar archive
The files in the source directory don't change. The file in the error message is usually different (ie: it appears to be truncated in a different place).
For reference (copied from the document linked to above), this is the analog to what I'm trying to do (I'm using a different pod name and directory names):
kubectl exec -n my-namespace my-pod -- tar cf - /tmp/foo | tar xf - -C /tmp/bar
After running it, I expect the contents of my local /tmp/bar to be the same as those in the pod.
However, more often than not, it fails. My current theory (I have a very limited understanding of how kubectl works, so this is all speculation) is that when kubectl determines that the tar command has completed, it terminates -- regardless of whether or not there are remaining bytes in transit (over the network) containing the contents of standard output.
I've tried various combinations of:
stdbuf
Changing tar's blocking factor
Making the command take longer to run (by adding && sleep <x>)
I'm not going to list all combinations I've tried, but this is an example that uses everything:
kubectl exec -n my-namespace my-pod -- stdbuf -o 0 tar -b 1 -c -f - -C /tmp/foo . && sleep 2 | tar xf - -C /tmp/bar
There are combinations of that command that I can make work pretty reliably. For example, forgetting about stdbuf and -b 1 and just sleeping for 100 seconds, ie:
kubectl exec -n my-namespace my-pod -- tar -c -f - -C /tmp/foo . && sleep 100 | tar xf - -C /tmp/bar
But even more experimentation led me to believe that the block size of tar (512 bytes, I believe?) was still too large (the arguments of -b are a count of blocks, not the size of those blocks). This is the command I'm using for now:
kubectl exec -n my-namespace my-pod -- bash -c 'dd if=<(tar cf - -C /tmp/foo .) bs=16 && sleep 10' | tar xf - -C /tmp/bar
And yes, I HAD to make bs that small and sleep "that big" to make it work. But this at least gives me two variables I can mess with. I did find that if I set bs=1, I didn't have to sleep... but it took a LONG time to move all the data (one byte at a time).
So, I guess my questions are:
Is my theory that kubectl truncates standard output after it determines the command given to exec has finished correct?
Is there a better solution to this problem?
Maybe you haven't been specific enough with kubectl about what the full command it must contend with really is. There might be ambiguity as to who should be responsible for the pipe. The "--" probably doesn't direct kubectl to include the pipe as part of the remote command; that part is probably being intercepted by your local shell.
Have you tried wrapping all of it in double quotes?
CMD="tar cf - /tmp/foo | tar xf - -C /tmp/bar"
kubectl exec -n my-namespace my-pod -- "${CMD}"
That way the saving at the target is included in the scope of the process that kubectl monitors for completion.
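As an aside (not part of the original answer): kubectl cp wraps the same tar-over-exec mechanism, so for a plain directory copy it may be worth checking whether it hits the same truncation:
# copy /tmp/foo from the pod to the local /tmp/bar (kubectl cp uses tar under the hood)
kubectl cp my-namespace/my-pod:/tmp/foo /tmp/bar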

searching for a keyword in all the pods/replicas of a kubernetes deployment

I am running a deployment called mydeployment that manages several pods/replicas for a certain service. I want to search all the pods/instances/replicas of that deployment for a certain keyword. The command below defaults to one replica and returns matches from that replica only.
kubectl logs -f deploy/mydeployment | grep "keyword"
Is it possible to customize the above command to return all the matching keywords out of all instances/pods of the deployment mydeployment? Any hint?
Save this to a file named fetchLogs.sh and, if you are on a Linux box, run it with sh fetchLogs.sh:
#!/bin/sh
podFilter="key-word-from-pod-name"
keyWord="actual-log-search-keyword"
nameSpace="name-space-where-the-pods-are-running"

echo "Running Script.."
# loop over every pod in the namespace whose name matches the filter
for podName in $(kubectl get pods -n "${nameSpace}" -o name | grep -i "${podFilter}" | cut -d'/' -f2);
do
  echo "searching pod ${podName}"
  kubectl -n "${nameSpace}" logs "pod/${podName}" | grep -i "${keyWord}"
done
I used pods here; if you want to target a deployment instead, the idea is the same, just change the kubectl command accordingly.
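For the deployment case specifically, a label selector avoids the loop entirely; a minimal sketch, assuming the deployment's pods carry a label such as app=mydeployment (adjust the label, namespace, and keyword to your setup):
# search the logs of every pod matching the label, prefixing each line with its pod name
kubectl logs -n "${nameSpace}" -l app=mydeployment --prefix --all-containers | grep -i "${keyWord}"
By default kubectl limits selector-based log streaming to 5 pods; --max-log-requests raises that if the deployment has more replicas.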

Running script from Linux shell inside a Kubernetes pod

Team,
I need to execute a shell script that is inside a Kubernetes pod. However, the call needs to come from outside the pod. Below is the script for your reference:
echo 'Enter Namespace: '; read namespace; echo $namespace;
kubectl exec -it `kubectl get po -n $namespace|grep -i podName|awk '{print $1}'` -n $namespace --- {scriptWhichNeedToExecute.sh}
Can anyone suggest how to do this?
There isn't really a good way. A simple option might be cat script.sh | kubectl exec -i <pod> -- bash, but that can have weird side effects. The more correct solution would be to use a debug container, but that feature is still in alpha right now.
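For completeness, here is that stdin-based pattern spelled out with the pod lookup from the question (a sketch using the question's placeholder names; it assumes bash exists in the container):
# resolve the pod name the same way the question does, then feed the local script over stdin
pod=$(kubectl get po -n "$namespace" | grep -i podName | awk '{print $1}')
kubectl exec -i "$pod" -n "$namespace" -- bash -s < scriptWhichNeedToExecute.sh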

Kubernetes - delete all jobs in bulk

I can delete all jobs inside a cluster by running
kubectl delete jobs --all
However, jobs are deleted one after another, which is pretty slow (for ~200 jobs I had the time to write this question and it was not even done).
Is there a faster approach?
It's a little easier to set up an alias for this bash command:
kubectl delete jobs `kubectl get jobs -o custom-columns=:.metadata.name`
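For example (the alias name is arbitrary), the alias could live in your shell rc file:
# in ~/.bashrc or similar; "deljobs" is just an example name
alias deljobs='kubectl delete jobs $(kubectl get jobs -o custom-columns=:.metadata.name)'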
I have a script for deleting jobs which is quite a bit faster:
$ cat deljobs.sh
set -x
for j in $(kubectl get jobs -o custom-columns=:.metadata.name)
do
  kubectl delete jobs $j &
done
And for creating 200 jobs I used the following script, driven by the command for i in {1..200}; do ./jobs.sh; done
$ cat jobs.sh
kubectl run memhog-$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 8 | head -n 1) --restart=OnFailure --record --image=derekwaynecarr/memhog --command -- memhog -r100 20m
If you are using CronJobs and the jobs are piling up quickly, you can let Kubernetes delete them automatically by configuring the job history limits described in the documentation. That is valid starting from version 1.6.
...
spec:
  ...
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
This works really well for me:
kubectl delete jobs $(kubectl get jobs -o custom-columns=:.metadata.name)
There is an easier way to do it:
To delete successful jobs:
kubectl delete jobs --field-selector status.successful=1
To delete failed or long-running jobs:
kubectl delete jobs --field-selector status.successful=0
I use this script; it's fast, but it can trash the CPU (it spawns a process per job). You can always adjust the sleep parameter:
#!/usr/bin/env bash
echo "Deleting all jobs (in parallel - it can trash CPU)"
kubectl get jobs --all-namespaces | sed '1d' | awk '{ print $2, "--namespace", $1 }' | while read line; do
  echo "Running with: ${line}"
  kubectl delete jobs ${line} &
  sleep 0.05
done
The best way for me is (for completed jobs older than a day):
kubectl get jobs | grep 1/1 | gawk 'match($0, / ([0-9]*)h/, ary) { if(ary[1]>24) print $1}' | parallel -r --bar -P 32 kubectl delete jobs
grep 1/1 for completed jobs
gawk 'match($0, / ([0-9]*)h/, ary) { if(ary[1]>24) print $1}' for jobs older than a day
-P number of parallel processes
It is faster than kubectl delete jobs --all, has a progress bar and you can use it when some jobs are still running.
kubectl delete jobs --all --cascade=false is fast, but won't delete associated resources, such as Pods
https://github.com/kubernetes/kubernetes/issues/8598
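If you do go the --cascade=false route, the orphaned pods can be cleaned up separately; a hedged example, assuming you only want to remove pods that have already completed (note this matches every succeeded pod in the namespace, not just job pods):
# delete pods that finished successfully, e.g. the ones left behind by completed jobs
kubectl delete pods --field-selector=status.phase==Succeeded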
Parallelize using GNU parallel
parallel --jobs=5 "echo {}; kubectl delete jobs {} -n core-services;" ::: $(kubectl get job -o=jsonpath='{.items[?(@.status.succeeded==1)].metadata.name}' -n core-services)
kubectl get jobs -o custom-columns=:.metadata.name | grep specific* | xargs kubectl delete jobs
kubectl get jobs -o custom-columns=:.metadata.name gives you the list of job names | then you can grep for the specific ones you need with a regexp | then xargs uses that output to delete them from the list one by one.
Probably there's no other way to delete all jobs at once, because even kubectl delete jobs queries jobs one at a time. What Norbert van Nobelen is suggesting might get a faster result, but it won't make much of a difference.
The kubectl bulk plugin (bulk-action on krew) may be useful for you; it gives you bulk operations on selected resources.
This is the command for deleting jobs:
kubectl bulk jobs delete
You can check the details at
https://github.com/emreodabas/kubectl-plugins/blob/master/README.md#kubectl-bulk-aka-bulk-action

howto: elastic beanstalk + deploy docker + graceful shutdown

Hi great people of Stack Overflow,
We're hosting a Docker container on EB with Node.js-based code running on it.
When redeploying our Docker container, we'd like the old one to do a graceful shutdown.
I've found help & guides on how our code could receive the SIGTERM signal produced by the 'docker stop' command.
However, further investigation into the EB machine running Docker, at:
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the new staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to 'docker stop'?
Or, in general, is there a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self answering as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure whatever is inside the Docker container shuts down gracefully.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon it.
This is totally unrelated to Docker; you can test it running anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js, etc.).
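Purely as an illustration of the behaviour to aim for (a toy shell sketch, not our Node.js handler), this is the kind of signal handling you can test locally:
#!/bin/sh
# toy example: trap SIGINT/SIGTERM and exit cleanly instead of being killed mid-work
cleanup() {
  echo "caught signal, shutting down gracefully"
  exit 0
}
trap cleanup INT TERM
echo "running as pid $$; press Ctrl+C (SIGINT) to test"
while true; do
  sleep 1
done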
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put 2 custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # find currently running docker
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi
      # give graceful kill command to all running .js files (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"
      # wait (max 5 mins) until processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep' " | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          let TRIES-=1
          sleep 3
        fi
      done
      echo "failed to graceful shutdown, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip and deletes (if it exists) our custom script.
gracefulshutdown_02.config is where the magic happens.
It creates a 05gracefulshutdown enact script and makes sure the flip will happen afterwards by renaming 01flip.sh to 10flip.sh.
05gracefulshutdown, the custom script, basically does this:
find the currently running Docker container
find all processes that need to be sent a SIGINT (for us it's the processes with 'workers' in their name)
send a SIGINT to the above processes
loop:
check if the processes from before were killed
continue looping for a set number of tries
if the tries run out, exit with status "1" and don't continue to 10flip; manual intervention is needed.
This assumes you only have 1 Docker container running on the machine, and that you are able to manually hop on and check what's wrong in case it fails (for us that has never happened yet).
I imagine it can also be improved in many ways, so have fun.