Edit a Kubernetes resource using kubectl run --command

I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is:
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
    - name: tesst
      parameters:
        perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?

It would be better if you posted more info about the error or the trace you are getting when executing the command, but I have a question that could give a good insight into what is happening here.
Does the kubectl command that you are running inside bitnami/kubectl:latest have any context that allows it to connect to your cluster?
If you take a look at the kubectl Docker Hub documentation, you can see that you should map a config file into the container in order to connect to your own cluster.
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
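Separately, note that the pipes in your original command are interpreted by your local shell, not inside the pod, so only the kubectl get part would actually run in the container. A sketch that wraps the whole pipeline in one shell invocation inside the pod (this assumes the image provides sh and jq, which bitnami/kubectl may not ship by default):
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- sh -c \
  'kubectl get testcr test -n my-namespace -o json | jq ".spec.strategies[0].parameters.perecetage=\"66\"" | kubectl apply -f -'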

Related

How do you specify GKE resource requests for Argo CD?

I am trying to set up Argo CD on Google Kubernetes Engine Autopilot and each pod/container is defaulting to the default resource request (0.5 vCPU and 2 GB RAM per container). This is way more than the pods need and is going to be too expensive (13GB of memory reserved in my cluster just for Argo CD). I am following the Getting Started guide for Argo CD and am running the following command to add Argo CD to my cluster:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
How do I specify the resources for each pod when I am using someone else's yaml template? The only way I have found to set resource requests is with my own yaml file like this:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
But I don't understand how to apply this type of configuration to Argo CD.
Thanks!
So right now you are just using kubectl with the manifest from GitHub, and you cannot edit it. What you need to do is:
1. Download the file with wget: https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
2. Use an editor like nano or vim to add the requests, as explained in my comments above (a sketch of such a block follows this list).
3. Then use kubectl apply -f newfile.yaml
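For reference, this is roughly the kind of resources block you would add under each container spec in install.yaml; the values here are placeholders to tune per component:
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi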
You can dump the yaml of argocd, then customize your resource request, and then apply the modified yaml.
$ kubectl get deployment -n argocd -o yaml > argocd_deployment.yaml
$ kubectl get sts -n argocd -o yaml > argocd_statefulset.yaml
$ # modify resource
$ vim argocd_deployment.yaml
$ vim argocd_statefulset.yaml
$ kubectl apply -f argocd_deployment.yaml
$ kubectl apply -f argocd_statefulset.yaml
Or modify the deployment and statefulset directly with kubectl edit:
$ kubectl edit deployment -n argocd
$ kubectl edit sts -n argocd
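If you prefer a non-interactive one-liner, kubectl patch can set the requests without opening an editor. A sketch using a strategic merge patch; the argocd-repo-server deployment and container name are assumptions, so substitute whichever component you actually want to tune:
$ kubectl -n argocd patch deployment argocd-repo-server -p '{"spec":{"template":{"spec":{"containers":[{"name":"argocd-repo-server","resources":{"requests":{"cpu":"250m","memory":"256Mi"}}}]}}}}'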

How to create a Kubernetes service YAML file without --dry-run

It seems that the --dry-run flag is not available for kubectl create service.
kubectl create service --
--add-dir-header --log-backtrace-at --server
--alsologtostderr --log-dir --skip-headers
--as --log-file --skip-log-headers
--as-group --log-file-max-size --stderrthreshold
--cache-dir --log-flush-frequency --tls-server-name
--certificate-authority --logtostderr --token
--client-certificate --match-server-version --user
--client-key --namespace --username
--cluster --password --v
--context --profile --vmodule
--insecure-skip-tls-verify --profile-output --warnings-as-errors
--kubeconfig --request-timeout
Is there a way to create a service YAML file without the --dry-run=client option? I tried with the below command and am getting an error.
kubectl create service ns-service nodeport --dry-run=client -o yaml >nodeport.yaml
Error: unknown flag: --dry-run
See 'kubectl create service --help' for usage.
There are two ways to do this.
=================================================================
First way: using kubectl create service
What is wrong here is that you are giving the service name before the service type in the command; that's why it's failing.
The correct way is:
Syntax:
kubectl create service clusterip NAME [--tcp=<port>:<targetPort>] [--dry-run=server|client|none] [options]
Example:
kubectl create service nodeport ns-service --tcp=80:80 --dry-run=client -o yaml
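This should print YAML roughly like the following (exact fields can vary by kubectl version):
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: ns-service
  name: ns-service
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ns-service
  type: NodePort
status:
  loadBalancer: {}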
=================================================================
Second way:
Here you can use the kubectl expose command to create a service file.
Let's assume you have a pod running with the name nginx, and you want to create a service for the nginx pod.
Then you can use the below command to generate the service file.
Syntax:
kubectl expose [pod/deployment/replicaset] [name-of-pod/deployment/replicaset] --port=80 --target-port=8000 --dry-run=client -o yaml
Example:
kubectl expose pod nginx --port=80 --target-port=8000 --dry-run=client -o yaml
Output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    run: nginx
status:
  loadBalancer: {}

What are production uses for Kubernetes pods without an associated deployment?

I have seen the one-pod <-> one-container rule, which seems to apply to business logic pods, but has exceptions when it comes to shared network/volume related resources.
What are encountered production uses of deploying pods without a deployment configuration?
I use pods directly to start a CentOS (or other operating system) container in which to verify connections or test command-line options.
As a specific example, below is a shell script that starts an Ubuntu container. You can easily modify the manifest to test secret access or change the service account to test access control.
#!/bin/bash
RANDOMIZER=$(uuid | cut -b-5)
POD_NAME="bash-shell-$RANDOMIZER"
IMAGE=ubuntu
NAMESPACE=$(uuid)
kubectl create namespace $NAMESPACE
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  namespace: $NAMESPACE
spec:
  containers:
  - name: $POD_NAME
    image: $IMAGE
    command: ["/bin/bash"]
    args: ["-c", "while true; do date; sleep 5; done"]
  hostNetwork: true
  dnsPolicy: Default
  restartPolicy: Never
EOF
echo "---------------------------------"
echo "| Press ^C when pod is running. |"
echo "---------------------------------"
kubectl -n $NAMESPACE get pod $POD_NAME -w
echo
kubectl -n $NAMESPACE exec -it $POD_NAME -- /bin/bash
kubectl -n $NAMESPACE delete pod $POD_NAME
kubectl delete namespace $NAMESPACE
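For quick one-off cases you can get a similar interactive shell, minus the throwaway namespace, with a single command; --rm deletes the pod when you exit:
kubectl run -it --rm bash-shell --image=ubuntu -- /bin/bash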
In our case, we use standalone pods for debugging purposes only.
Otherwise you want your configuration to be stateless and written in YAML files.
For instance, debugging DNS resolution: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default

Run a specific command in all pods

I was wondering if it is possible to run a specific command (example: echo "foo") at a specific time in all existing pods (pods that are not in the default namespace are included). It would be like a CronJob, but the only difference is that I want to specify/deploy it in one place only. Is that even possible?
It is possible. Please find the steps I followed; I hope it helps you.
First, create a simple script that reads the pod names and execs into each pod to execute the command.
import os, sys
import logging
from datetime import datetime

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

dt = datetime.now()
ts = dt.strftime("%d-%m-%Y-%H-%M-%S-%f")

# Skip the header line of the kubectl output
pods = os.popen("kubectl get po --all-namespaces").readlines()[1:]

for pod in pods:
    ns = pod.split()[0]
    po = pod.split()[1]
    try:
        h = os.popen("kubectl -n %s exec -i %s -- hostname" % (ns, po)).read()
        os.popen("kubectl -n %s exec -i %s -- touch /tmp/foo-%s.txt" % (ns, po, ts))
        logging.debug("Executed on %s" % h)
    except Exception as e:
        logging.error(e)
Next, Dockerize the above script, build and push.
FROM python:3.8-alpine
ENV KUBECTL_VERSION=v1.18.0
WORKDIR /foo
ADD https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl .
RUN chmod +x kubectl && \
    mv kubectl /usr/local/bin
COPY foo.py .
CMD ["python", "foo.py"]
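A sketch of the build-and-push step; the registry and tag here are placeholders for your own:
docker build -t <your-registry>/foo:latest .
docker push <your-registry>/foo:latest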
Later we'll use this image in the CronJob. You can see I have installed kubectl in the Dockerfile to run the kubectl commands. But that alone is insufficient; we should add a ClusterRole and ClusterRoleBinding for the service account which runs the CronJob.
I have created a namespace foo and bound foo's default service account to the ClusterRole I created, as shown below.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
roleRef:
  kind: ClusterRole
  name: foo
  apiGroup: rbac.authorization.k8s.io
Now the default service account of foo has permissions to get, list, and exec into all the pods in the cluster.
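You can sanity-check the binding with kubectl auth can-i impersonation before scheduling anything:
$ kubectl auth can-i list pods --as=system:serviceaccount:foo:default
$ kubectl auth can-i create pods/exec --as=system:serviceaccount:foo:default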
Finally, create a CronJob to run the task.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: foo
spec:
  schedule: "15 9 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: foo
            image: harik8/sof:62177831
            imagePullPolicy: Always
          restartPolicy: OnFailure
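Note that the CronJob must be created in the foo namespace so that its pod runs under the default service account bound above; for example (the file name is just a placeholder):
kubectl -n foo apply -f cronjob.yaml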
Log in to the pods and check; it should have created a file with the timestamp in the /tmp directory of each pod.
$ kubectl exec -it app-59666bb5bc-v6p2h sh
# ls -lah /tmp
-rw-r--r-- 1 root root 0 Jun 4 09:15 foo-04-06-2020-09-15-06-792614.txt
Logs:
error: cannot exec into a container in a completed pod; current phase is Failed
error: cannot exec into a container in a completed pod; current phase is Succeeded
DEBUG:root:Executed on foo-1591262100-798ng
DEBUG:root:Executed on grafana-5f6f8cbf75-jtksp
DEBUG:root:Executed on istio-egressgateway-557dcf8d8-npfnd
DEBUG:root:Executed on istio-ingressgateway-6489d9556d-2dp7j
command terminated with exit code 126
DEBUG:root:Executed on OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"hostname\": executable file not found in $PATH": unknown
DEBUG:root:Executed on istiod-774777b79-mvmqm
It is possible, but a bit complicated, and you would need to write everything yourself, as there are no automatic tools to do that as far as I'm aware.
You could use the Kubernetes API to collect all pod names, then loop over them and run kubectl exec against each pod.
To list all pods in a cluster, GET /api/v1/pods; this will also list the system ones.
This script could be run by a Kubernetes CronJob at your specified time.
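As a quick sketch of the collection step (assuming jq is available):
kubectl get --raw /api/v1/pods | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name'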
There you go:
for ns in $(kubectl get ns -oname | awk -F "/" '{print $2}'); do for pod in $(kubectl get po -n $ns -oname | awk -F "/" '{print $2}'); do kubectl exec $pod -n $ns -- echo foo; done; done
It will return an error if echo (or whatever command you run) is not available in the container. Other than that, it should work.

How to pass namespace in Kubernetes create deployment command

I am trying to define the namespace name when executing the kubectl create deployment command.
This is what I tried:
kubectl create deployment test --image=banu/image1 namespace=test
and this doesn't work.
And I want to expose this deployment using a ClusterIP service within the cluster itself for that given namespace. How can I do that using the kubectl command line?
You can specify either the -n or --namespace option.
Run kubectl create deployment test --image=nginx --namespace default --dry-run -o yaml and inspect the resulting deployment YAML.
Using kubectl run
kubectl run test --namespace test --image nginx --port 9090 --dry-run -o yaml
You need to create a namespace like this:
kubectl create ns test
ns stands for namespace, so with kubectl you are saying you want to create a namespace with the name test.
Then, while creating the deployment, you add the namespace you want:
kubectl create deployment test --image=banu/image1 -n test
The -n flag stands for namespace; that way you tell Kubernetes that all resources related to that deployment will be under the test namespace.
In order to see all the resources under a specific namespace:
kubectl get all -n test
--namespace and -n are the same thing.
Use -n test instead of namespace=test
Sample with nginx image:
$ kubectl create deployment nginx --image=nginx -n test
deployment.apps/nginx created
$ kubectl get deploy -n test
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 8s
For the second part, you need to create a service whose selector matches the labels from the deployment.
You can find the correct labels by running something like:
kubectl -n test describe deploy test |grep Labels:
and apply a service like this:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
spec:
  ports:
  - name: test
    port: 80 # Change this port
    protocol: TCP
  type: ClusterIP
  selector:
    # Here you need to define output from previous step
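Alternatively, kubectl expose can generate a ClusterIP service directly from the deployment's own selector; a sketch, with the port as a placeholder:
kubectl expose deployment test -n test --port=80 --type=ClusterIP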