Using "kubectl delete pods X" from inside Kubernetes - kubernetes

We're running Kubernetes >1.8 on Google Cloud. Unfortunately EventStore stops pushing data until it is rebooted, so we'd like to run kubectl --namespace=$NAMESPACE delete pod eventstore-0 every 6 hours. We have a cron job like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: eventstore-restart
spec:
  # Run at :00, :15, :30 and :45 of every hour.
  schedule: "0,15,30,45 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: eventstore-restart
            image: eu.gcr.io/$PROJECT_ID/kubectl:latest
            imagePullPolicy: Always
            command: [ "/bin/sh", "-c" ]
            args:
            - 'set -x; kubectl --namespace=$NAMESPACE get pods
              | grep -ho "eventstore-\d+"
              | xargs -n 1 -I {} kubectl --namespace=$NAMESPACE delete pod {}'
            env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          restartPolicy: OnFailure
          serviceAccount: restart-eventstore
However, this seems to expand so that kubectl get pods ... is piped into | { ... }, which causes "/bin/sh: syntax error: unexpected end of file (expecting "}")" and makes the script fail.
How do I write the command to delete a pod on a schedule?

I would do this:
kubectl delete po $(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep eventstore) -n $NAMESPACE
or (your way)
kubectl get pods -n $NAMESPACE -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep eventstore | xargs -n 1 -I {} kubectl delete po {}
Now, if you already know you want to delete the pod "eventstore-0", why not simply run kubectl delete pod eventstore-0 directly?

I suggest using label selectors to filter the results of kubectl get, and jsonpath output to get just the names of the pods.
Assuming that your pod is labeled with app=eventstore and that you want to delete every pod with this label, you could use the following command:
k get po --namespace=$NAMESPACE --selector app=eventstore -o jsonpath="{.items[*].metadata.name}" | xargs -n 1 -I {} kubectl --namespace=$NAMESPACE delete po {}
If you want to delete just the first pod, use jsonpath="{.items[0].metadata.name}"
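For example, the "delete only the first pod" variant could be wired up like this (a sketch, assuming the same app=eventstore label as above):
kubectl --namespace=$NAMESPACE delete pod \
  "$(kubectl --namespace=$NAMESPACE get pods --selector app=eventstore \
     -o jsonpath='{.items[0].metadata.name}')"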

#suren's answer is good, but it won't work in all cases where you want to delete multiple specific pods. For example, if you have pod1, pod2, pod3 and pod4, and you want to delete only pod2 and pod4, you can't do it with grep (grep pod will catch all of them).
In that case, you can delete them like this:
kubectl delete pods -n $NAMESPACE $(echo -e 'pod2\npod4')
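Equivalently, since kubectl delete accepts several pod names at once, you can simply list them (pod2 and pod4 being the example pods above):
kubectl delete pods -n $NAMESPACE pod2 pod4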

Related

Edit kubernetes resource using kubectl run --command

I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is:
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
  - name: tesst
    parameters:
      perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?
It would be better if you posted more info about the error or the trace you get when executing the command, but I have a question that could give a good insight into what is happening here.
Does the kubectl command that you are running inside bitnami/kubectl:latest have any context that allows it to connect to your cluster?
If you take a look at the kubectl Docker Hub documentation, you can see that you should map a config file into the container in order to connect to your own cluster:
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
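When the same image runs inside the cluster rather than via docker run, the equivalent question is whether the pod's ServiceAccount is allowed to touch the CR. A hedged way to probe that from outside the pod (assuming the pod runs under the default ServiceAccount of my-namespace, and that the CRD's plural resource name is testcrs):
kubectl auth can-i get testcrs -n my-namespace --as=system:serviceaccount:my-namespace:default
kubectl auth can-i patch testcrs -n my-namespace --as=system:serviceaccount:my-namespace:default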

What are production uses for Kubernetes pods without an associated deployment?

I have seen the one-pod <-> one-container rule, which seems to apply to business logic pods, but has exceptions when it comes to shared network/volume related resources.
What production uses have you encountered for deploying pods without a deployment configuration?
I use pods directly to start a CentOS (or other operating system) container in which to verify connections or test command-line options.
As a specific example, below is a shell script that starts an Ubuntu container. You can easily modify the manifest to test secret access or change the service account to test access control.
#!/bin/bash
RANDOMIZER=$(uuid | cut -b-5)
POD_NAME="bash-shell-$RANDOMIZER"
IMAGE=ubuntu
NAMESPACE=$(uuid)
kubectl create namespace $NAMESPACE
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  namespace: $NAMESPACE
spec:
  containers:
  - name: $POD_NAME
    image: $IMAGE
    command: ["/bin/bash"]
    args: ["-c", "while true; do date; sleep 5; done"]
  hostNetwork: true
  dnsPolicy: Default
  restartPolicy: Never
EOF
echo "---------------------------------"
echo "| Press ^C when pod is running. |"
echo "---------------------------------"
kubectl -n $NAMESPACE get pod $POD_NAME -w
echo
kubectl -n $NAMESPACE exec -it $POD_NAME -- /bin/bash
kubectl -n $NAMESPACE delete pod $POD_NAME
kubectl delete namespace $NAMESPACE
In our case, we use standalone pods for debugging purposes only.
Otherwise you want your configuration to be stateless and written in YAML files.
For instance, debugging DNS resolution: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
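When you are done debugging, the pod from that example can simply be removed again:
kubectl delete pod dnsutils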

Run a specific command in all pods

I was wondering if it is possible to run a specific command (for example: echo "foo") at a specific time in all existing pods (including pods that are not in the default namespace). It would be like a CronJob, but the only difference is that I want to specify/deploy it in one place only. Is that even possible?
It is possible. Please find the steps I followed; I hope they help you.
First, create a simple script that reads the pod names, execs into each pod and executes the command.
import os, sys
import logging
from datetime import datetime

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

dt = datetime.now()
ts = dt.strftime("%d-%m-%Y-%H-%M-%S-%f")

# --no-headers drops the NAMESPACE/NAME header line so it is not treated as a pod
pods = os.popen("kubectl get po --all-namespaces --no-headers").readlines()
for pod in pods:
    ns = pod.split()[0]
    po = pod.split()[1]
    try:
        h = os.popen("kubectl -n %s exec -i %s sh -- hostname" %(ns, po)).read()
        os.popen("kubectl -n %s exec -i %s sh -- touch /tmp/foo-%s.txt" %(ns, po, ts))
        logging.debug("Executed on %s" %h)
    except Exception as e:
        logging.error(e)
Next, Dockerize the above script, build and push.
FROM python:3.8-alpine
ENV KUBECTL_VERSION=v1.18.0
WORKDIR /foo
ADD https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl .
RUN chmod +x kubectl && \
    mv kubectl /usr/local/bin
COPY foo.py .
CMD ["python", "foo.py"]
Later we'll use this image in the CronJob. You can see I have installed kubectl in the Dockerfile to run the kubectl commands. But that alone is not enough: we should add a ClusterRole and ClusterRoleBinding for the service account which runs the CronJob.
I have created a namespace foo and bound foo's default service account to the ClusterRole I created, as shown below.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
roleRef:
  kind: ClusterRole
  name: foo
  apiGroup: rbac.authorization.k8s.io
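Creating the namespace and applying the RBAC objects could look like this (rbac.yaml is an assumed filename for the manifest above):
kubectl create namespace foo
kubectl apply -f rbac.yaml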
Now the default service account of namespace foo has permission to get, list and exec into all the pods in the cluster.
Finally, create a CronJob to run the task.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: foo
spec:
  schedule: "15 9 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: foo
            image: harik8/sof:62177831
            imagePullPolicy: Always
          restartPolicy: OnFailure
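To deploy it into the foo namespace and trigger a one-off test run without waiting for the schedule (cronjob.yaml and the job name foo-manual are assumed names):
kubectl apply -n foo -f cronjob.yaml
kubectl create job foo-manual --from=cronjob/foo -n foo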
Log in to the pods and check; a file with the timestamp should have been created in the /tmp directory of each pod.
$ kubectl exec -it app-59666bb5bc-v6p2h sh
# ls -lah /tmp
-rw-r--r-- 1 root root 0 Jun 4 09:15 foo-04-06-2020-09-15-06-792614.txt
Logs:
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Succeeded
DEBUG:root:Executed on foo-1591262100-798ng
DEBUG:root:Executed on grafana-5f6f8cbf75-jtksp
DEBUG:root:Executed on istio-egressgateway-557dcf8d8-npfnd
DEBUG:root:Executed on istio-ingressgateway-6489d9556d-2dp7j
command terminated with exit code 126
DEBUG:root:Executed on OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"hostname\": executable file not found in $PATH": unknown
DEBUG:root:Executed on istiod-774777b79-mvmqm
It is possible, but a bit complicated, and you would need to write everything yourself, as there are no automatic tools to do that as far as I'm aware.
You could use the Kubernetes API to collect all pod names, then loop over them and run kubectl exec <pod_name> <command> against each of those pods.
To list all pods in a cluster, GET /api/v1/pods; this will also list the system ones.
This script could be run with a Kubernetes CronJob at your specified time.
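A minimal sketch of that API route (assuming kubectl proxy is running locally and jq is available):
kubectl proxy --port=8001 &
curl -s http://localhost:8001/api/v1/pods \
  | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"'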
There you go:
for ns in $(kubectl get ns -oname | awk -F "/" '{print $2}'); do for pod in $(kubectl get po -n $ns -oname | awk -F "/" '{print $2}'); do kubectl exec $pod -n $ns echo foo; done; done
It will return an error if echo (or whatever command you run) is not available in the container. Other than that, it should work.

kubernetes cron job which should run every 10mins and should delete the pods which are in "Terminating" state in all the namespaces in the cluster?

I need a Kubernetes cron job which runs every 10 minutes and deletes the pods which are in "Terminating" state in all the namespaces in the cluster. Please help me out; I am struggling with the bash one-liner shell script.
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
  labels:
    jobgroup: jobexample
spec:
  template:
    metadata:
      name: jobexample
      labels:
        jobgroup: jobexample
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"]
      restartPolicy: Never
List all terminating pods in all namespaces in the format {namespace}.{name}:
kubectl get pods --field-selector=status.phase=Terminating --output=jsonpath='{range .items[*]}{.metadata.namespace}{"."}{.metadata.name}{"\n"}{end}' --all-namespaces=true
Given a pod's name and its namespace, it can be force deleted by:
kubectl delete pods <pod> --grace-period=0 --force --namespace=<namespace>
In one line:
for i in `kubectl get pods --field-selector=status.phase=Terminating --output=jsonpath='{range .items[*]}{.metadata.namespace}{"."}{.metadata.name}{"\n"}{end}' --all-namespaces=true`; do kubectl delete pods ${i##*.} --grace-period=0 --force --namespace=${i%%.*}; done
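The same loop, unrolled for readability (a sketch; namespaces cannot contain dots, so splitting on the first dot is safe):
kubectl get pods --all-namespaces \
  --field-selector=status.phase=Terminating \
  --output=jsonpath='{range .items[*]}{.metadata.namespace}{"."}{.metadata.name}{"\n"}{end}' |
while read -r item; do
  ns=${item%%.*}    # everything before the first dot -> namespace
  pod=${item#*.}    # everything after the first dot  -> pod name
  kubectl delete pods "$pod" --grace-period=0 --force --namespace="$ns"
done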

Kubectl command to list pods of a deployment in Kubernetes

Is there a way to use kubectl to list only the pods belonging to a deployment?
Currently, I do this to get pods:
kubectl get pods| grep hello
But it seems overkill to get ALL the pods when I am only interested in the pods of a given deployment. I use the output of this command to see the status of all pods, and then possibly exec into one of them.
I also tried kc get -o wide deployments hellodeployment, but it does not print the Pod names.
There's a label on the pods matching the selector in the deployment; that's how a deployment manages its pods. For example, for the label/selector app=http-svc you can do something like this and avoid using grep and listing all the pods (this becomes useful as your number of pods becomes very large).
Here are some example command lines:
# single label
kubectl get pods -l=app=http-svc
kubectl get pods --selector=app=http-svc
# multiple labels
kubectl get pods --selector key1=value1,key2=value2
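If you'd rather not hard-code the label, the selector can also be read straight off the deployment (a sketch; http-svc is an assumed deployment name):
selector=$(kubectl get deployment http-svc \
  -o go-template='{{range $k, $v := .spec.selector.matchLabels}}{{$k}}={{$v}},{{end}}' | sed 's/,$//')
kubectl get pods --selector "$selector"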
K8s components are linked to each other by labels and selectors. There are just no built-in attributes like My-List-of-ReplicaSets or My-List-of-Pods for a deployment; you can't get them from kubectl describe or kubectl get.
As #Rico suggested above, you have to use label filters. But you can't simply rely on the labels you specify in the deployment manifest, because the deployment generates a random hash and uses it as an additional label (pod-template-hash).
For example, I have a deployment and a standalone pod that share the same label app=http-svc. While the first two pods are managed by the deployment, the third one is not and shouldn't be in the result.
ma.chi#~/k8s/deployments % kubectl get pods --show-labels
NAME                   READY   STATUS    RESTARTS   AGE   LABELS
http-9c89b5578-6cqbp   1/1     Running   0          7s    app=http-svc,pod-template-hash=574561134
http-9c89b5578-vwqbx   1/1     Running   0          7s    app=http-svc,pod-template-hash=574561134
nginx-standalone       1/1     Running   0          7s    app=http-svc
ma.chi#~/k8s/deployments %
The source file is
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: http-svc
  name: http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-svc
  strategy: {}
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - image: nginx:1.9.1
        name: nginx1
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: http-svc
  name: nginx-standalone
spec:
  containers:
  - image: nginx:1.9.1
    name: nginx1-standalone
To spot exactly the pods created and managed by your deployment, you can use the script below (which is ugly, but it's the best I can do):
DEPLOY_NAME=http
RS_NAME=`kubectl describe deployment $DEPLOY_NAME|grep "^NewReplicaSet"|awk '{print $2}'`; echo $RS_NAME
POD_HASH_LABEL=`kubectl get rs $RS_NAME -o jsonpath="{.metadata.labels.pod-template-hash}"` ; echo $POD_HASH_LABEL
POD_NAMES=`kubectl get pods -l pod-template-hash=$POD_HASH_LABEL --show-labels | tail -n +2 | awk '{print $1}'`; echo $POD_NAMES
Here's a tidier shell alias (based on this code by #kekaoyunfuwu) that only lists the pods of a deployment (no interim results are shown):
k_list_pods_in_deployment() (
  test $# -eq 0 && {
    echo "Missing deployment name" && kubectl get deployments
    return 1
  }
  deployment="$1"; shift
  replicaSet="$(kubectl describe deployment $deployment \
    | grep '^NewReplicaSet' \
    | awk '{print $2}'
  )"
  podHashLabel="$(kubectl get rs $replicaSet \
    -o jsonpath='{.metadata.labels.pod-template-hash}'
  )"
  kubectl get pods -l pod-template-hash=$podHashLabel --show-labels \
    | tail -n +2 | awk '{print $1}'
)
alias k.list-pods-in-deployment=k_list_pods_in_deployment
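Example usage (assuming a deployment named http-svc, as in the earlier answer):
k.list-pods-in-deployment http-svc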