I am trying to create a pod that runs a command to edit an existing resource, but it's not working.
My CR is
apiVersion: feature-toggle.resource.api.sap/v1
kind: TestCR
metadata:
  name: test
  namespace: my-namespace
spec:
  enabled: true
  strategies:
    - name: tesst
      parameters:
        perecetage: "10"
The command I am trying to run is
kubectl run kube-bitname --image=bitnami/kubectl:latest -n my-namespace --command -- kubectl get testcr test -n my-namespace -o json | jq '.spec.strategies[0].parameters.perecetage="66"' | kubectl apply -f -
But this does not work. Any idea?
It would be better if you posted more info about the error or the trace you are getting when executing the command, but I have a question that could give good insight into what is happening here.
Does the kubectl command that you are running inside bitnami/kubectl:latest have any context that allows it to connect to your cluster?
If you take a look at the kubectl Docker Hub documentation, you can see that you should map a config file into the pod in order to connect to your own cluster.
$ docker run --rm --name kubectl -v /path/to/your/kube/config:/.kube/config bitnami/kubectl:latest
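For a pod running inside the cluster, a minimal sketch of the same idea is shown below. The Secret name my-kubeconfig and its key config are hypothetical; you could create it first with kubectl create secret generic my-kubeconfig --from-file=config=$HOME/.kube/config -n my-namespace, and the kubeconfig is mounted at the same path the Docker Hub example above maps it to:
apiVersion: v1
kind: Pod
metadata:
  name: kube-bitname
  namespace: my-namespace
spec:
  restartPolicy: Never
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest
      command: ["kubectl", "get", "testcr", "test", "-n", "my-namespace", "-o", "json"]
      volumeMounts:
        - name: kubeconfig
          # same location the docker run example above maps the config to
          mountPath: /.kube/config
          subPath: config
  volumes:
    - name: kubeconfig
      secret:
        secretName: my-kubeconfig  # hypothetical Secret holding your kubeconfig under the key "config"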
I have a project to create a mutating webhook in the kube-system namespace, which needs to exclude the namespaces where the webhook server itself is deployed.
But the kube-system namespace already exists. How do I attach the required labels to it using Helm?
Helmfile offers hooks which are pretty neat for that:
releases:
  - name: istio-ingress
    namespace: istio-ingress
    chart: istio/gateway
    wait: true
    hooks:
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl create namespace istio-ingress --dry-run=client -o yaml | kubectl apply -f -"
      - events:
          - presync
        showlogs: true
        command: sh
        args:
          - -c
          - "kubectl label --dry-run=client -o yaml --overwrite namespace istio-ingress istio-injection=enabled | kubectl apply -f -"
Since the kube-system namespace is a core part of Kubernetes (every cluster has it preinstalled and some core components run there), Helm can't manage it.
Some possible things you could do instead:
Make the per-namespace labels opt-in, not opt-out; only apply the webhook in namespaces where the label is present, rather than in every namespace except flagged ones. (Istio's sidecar injector works this way; see the sketch after this list.)
Exclude kube-system as a special case in the code.
Manually run kubectl label namespace outside of Helm.
Make your larger-scale deployment pipeline run the kubectl command (for example, if you have a Jenkins build that installs the webhook, also make it set the label).
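For the first option, here is a minimal sketch of what an opt-in selector could look like on the webhook configuration. The webhook name, service, and label key are hypothetical; the point is that only namespaces carrying the label are sent to the webhook, so kube-system is skipped simply by never being labeled:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook                 # hypothetical
webhooks:
  - name: example.webhook.example.com   # hypothetical
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: example-webhook-svc       # hypothetical Service in front of the webhook server
        namespace: kube-system
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    # Opt-in: only namespaces that carry this label are mutated.
    namespaceSelector:
      matchLabels:
        example-webhook: enabled        # hypothetical label key/value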
I am trying to set up Argo CD on Google Kubernetes Engine Autopilot and each pod/container is defaulting to the default resource request (0.5 vCPU and 2 GB RAM per container). This is way more than the pods need and is going to be too expensive (13GB of memory reserved in my cluster just for Argo CD). I am following the Getting Started guide for Argo CD and am running the following command to add Argo CD to my cluster:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
How do I specify the resources for each pod when I am using someone else's yaml template? The only way I have found to set resource requests is with my own yaml file like this:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
    - name: memory-demo-ctr
      image: polinux/stress
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
But I don't understand how to apply this type of configuration to Argo CD.
Thanks!
So right now you are just applying the manifest from GitHub with kubectl, and you cannot edit it. What you need to do is:
1. Download the file with wget:
wget https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
2. Use an editor like nano or vim to edit the resource requests in the file, as explained in my comments above.
3. Then use kubectl apply -f newfile.yaml
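An alternative that keeps the upstream file untouched (not part of the original answer, so treat it as a sketch) is a kustomize overlay: reference the remote manifest as a resource and patch only the containers you care about. The target argocd-repo-server and the values below are just examples, and a kubectl/kustomize version with remote-resource support is assumed. Save this as kustomization.yaml and run kubectl apply -k .
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
patches:
  - target:
      kind: Deployment
      name: argocd-repo-server      # example target; repeat for the other workloads
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 100m
            memory: 128Mi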
You can dump the YAML of the Argo CD Deployments and StatefulSets, customize the resource requests, and then apply the modified YAML.
$ kubectl get deployment -n argocd -o yaml > argocd_deployment.yaml
$ kubectl get sts -n argocd -o yaml > argocd_statefulset.yaml
$ # modify resource
$ vim argocd_deployment.yaml
$ vim argocd_statefulset.yaml
$ kubectl apply -f argocd_deployment.yaml
$ kubectl apply -f argocd_statefulset.yaml
Or modify the Deployments and StatefulSets directly with kubectl edit:
$ kubectl edit deployment -n argocd
$ kubectl edit sts -n argocd
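For a single workload you could also patch the resources in place. A hedged sketch (the Deployment and container name argocd-repo-server and the values are assumptions; check your own manifests first):
kubectl -n argocd patch deployment argocd-repo-server --type=strategic -p '
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server   # container name assumed to match the Deployment name
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
'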
I have seen the one-pod <-> one-container rule, which seems to apply to business logic pods, but has exceptions when it comes to shared network/volume related resources.
What are encountered production uses of deploying pods without a deployment configuration?
I use pods directly to start a CentOS (or other operating system) container in which to verify connections or test command-line options.
As a specific example, below is a shell script that starts an ubuntu container. You can easily modify the manifest to test secret access or change the service account to test access control.
#!/bin/bash
RANDOMIZER=$(uuid | cut -b-5)
POD_NAME="bash-shell-$RANDOMIZER"
IMAGE=ubuntu
NAMESPACE=$(uuid)
kubectl create namespace $NAMESPACE
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $POD_NAME
  namespace: $NAMESPACE
spec:
  containers:
    - name: $POD_NAME
      image: $IMAGE
      command: ["/bin/bash"]
      args: ["-c", "while true; do date; sleep 5; done"]
  hostNetwork: true
  dnsPolicy: Default
  restartPolicy: Never
EOF
echo "---------------------------------"
echo "| Press ^C when pod is running. |"
echo "---------------------------------"
kubectl -n $NAMESPACE get pod $POD_NAME -w
echo
kubectl -n $NAMESPACE exec -it $POD_NAME -- /bin/bash
kubectl -n $NAMESPACE delete pod $POD_NAME
kubectl delete namespace $NAMESPACE
In our case, we use standalone pods for debugging purposes only.
Otherwise, you want your configuration to be stateless and written in YAML files.
For instance, debugging DNS resolution: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
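A related pattern for quick interactive checks is a throwaway pod that is deleted as soon as you exit the shell, for example:
kubectl run -it --rm debug-shell --image=busybox --restart=Never -- sh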
I'm playing around with GitOps and ArgoCD in Red Hat OpenShift. My goal is to switch a worker node to an infra node.
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
In order to make the node an infra node, I want to add a label "infra" and take the label "worker" from it. Before, the object looks like this (irrelevant labels omitted):
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
  name: node6.example.com
spec: {}
After applying a YAML File, it's supposed to look like that:
apiVersion: v1
kind: Node
metadata:
  labels:
    node-role.kubernetes.io/worker: ""
  name: node6.example.com
spec: {}
If I put the latter config in a file and do "kubectl apply -f", the node has both infra and worker labels. So adding a label or changing the value of a label is easy, but is there a way to remove a label in an object's metadata by applying a YAML file?
You can delete the label with:
kubectl label node node6.example.com node-role.kubernetes.io/infra-
Then you can run kubectl apply again with the new label.
You will be up and running.
I would say it's not possible to do with kubectl apply; at least I tried and couldn't find any information about that.
As @Petr Kotas mentioned, you can always use
kubectl label node node6.example.com node-role.kubernetes.io/infra-
But I see you're looking for something else
I want to do this with descriptive YAML Files, and NOT manually by using the command line (that's easy with kubectl label node ...)
So maybe the answer could be to use API clients, for example Python? I have found this example here, made by @Prafull Ladha:
As already mentioned, that is the correct kubectl example to delete a label, but there is no mention of removing labels using API clients. If you want to remove a label using the API, then you need to provide a new body with the label name set to None and then patch that body to the node or pod. I am using the Kubernetes Python client API for example purposes.
from pprint import pprint
from kubernetes import client, config

config.load_kube_config()
client.configuration.debug = True

api_instance = client.CoreV1Api()

# Setting a label's value to None removes that label when the body is patched.
body = {
    "metadata": {
        "labels": {
            "label-name": None
        }
    }
}

api_response = api_instance.patch_node("minikube", body)

print(api_response)
Try setting the worker label to false:
node-role.kubernetes.io/worker: "false"
Worked for me on OpenShift 4.4.
Edit:
This doesn't work. What happened was:
Applied a YML file containing node-role.kubernetes.io/worker: "false"
An automated process ran, deleting the node-role.kubernetes.io/worker label from the node (due to it not being specified in the YML it would automatically apply)
What's funny is that the automated process would not delete the label if it was empty instead of set to false.
I've pretty successfully changed a node label in my Kubernetes cluster (created using kubeadm) using kubectl replace and kubectl apply.
Required: if your node configuration was changed manually using an imperative command like kubectl label, you first need to fix the last-applied-configuration annotation using the following command (replace node2 with your node name):
kubectl get node node2 -o yaml | kubectl apply -f -
Note: it works the same way with all types of Kubernetes objects (with slightly different consequences; always check the results).
Note 2: the --export argument for kubectl get is deprecated, and the command works well without it, but if you use it the last-applied-configuration annotation turns out much shorter and easier to read.
Without applying the existing configuration first, the next kubectl apply command will ignore all fields that are not present in the last-applied-configuration annotation.
The following example illustrates that behavior:
kubectl get node node2 -o yaml | grep node-role
{"apiVersion":"v1","kind":"Node","metadata":{"annotations":{"flannel.alpha.coreos.com/backend-data":"{\"VtepMAC\":\"46:c6:d1:f0:6c:0a\"}","flannel.alpha.coreos.com/backend-type":"vxlan","flannel.alpha.coreos.com/kube-subnet-manager":"true","flannel.alpha.coreos.com/public-ip":"10.156.0.11","kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"creationTimestamp":null,
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"node2",
"kubernetes.io/os":"linux",
"node-role.kubernetes.io/worker":""}, # <--- important line: only worker label is present
"name":"node2","selfLink":"/api/v1/nodes/node2"},"spec":{"podCIDR":"10.244.2.0/24"},"status":{"daemonEndpoints":{"kubeletEndpoint":{"Port":0}},"nodeInfo":{"architecture":"","bootID":"","containerRuntimeVersion":"","kernelVersion":"","kubeProxyVersion":"","kubeletVersion":"","machineID":"","operatingSystem":"","osImage":"","systemUUID":""}}}
node-role.kubernetes.io/santa: ""
node-role.kubernetes.io/worker: ""
Let's check what happens with the node-role.kubernetes.io/santa label if I try to replace worker with infra and remove santa (only worker is present in the annotation):
# kubectl diff is used to compare the current online configuration with the configuration as it would be if applied
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-380689040/v1.Node..node2 /tmp/MERGED-682760879/v1.Node..node2
--- /tmp/LIVE-380689040/v1.Node..node2 2020-04-08 17:20:18.108809972 +0000
+++ /tmp/MERGED-682760879/v1.Node..node2 2020-04-08 17:20:18.120809972 +0000
@@ -18,8 +18,8 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
+ node-role.kubernetes.io/infra: "" # <-- created as desired
node-role.kubernetes.io/santa: "" # <-- ignored, because the label isn't present in the last-applied-configuration annotation
- node-role.kubernetes.io/worker: "" # <-- removed as desired
name: node2
resourceVersion: "60973814"
selfLink: /api/v1/nodes/node2
exit status 1
After fixing the annotation (by running kubectl get node node2 -o yaml | kubectl apply -f -), kubectl apply works pretty well, replacing and removing labels:
kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | sed 's#node-role.kubernetes.io/santa: ""##'| kubectl diff -f -
diff -u -N /tmp/LIVE-107488917/v1.Node..node2 /tmp/MERGED-924858096/v1.Node..node2
--- /tmp/LIVE-107488917/v1.Node..node2 2020-04-08 18:01:55.776699954 +0000
+++ /tmp/MERGED-924858096/v1.Node..node2 2020-04-08 18:01:55.792699954 +0000
@@ -18,8 +18,7 @@
kubernetes.io/arch: amd64
kubernetes.io/hostname: node2
kubernetes.io/os: linux
- node-role.kubernetes.io/santa: "" # <-- removed as desired
- node-role.kubernetes.io/worker: "" # <-- removed as desired, literally replaced with the following label
+ node-role.kubernetes.io/infra: "" # <-- created as desired
name: node2
resourceVersion: "60978298"
selfLink: /api/v1/nodes/node2
exit status 1
Here are a few more examples:
# Check the original label (the last filter removes the last-applied-configuration annotation line)
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# Replace the label "infra" with "worker" using kubectl replace syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""#node-role.kubernetes.io/worker: ""#' | kubectl replace -f -
node/node2 replaced
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/worker: ""
# label replaced -------^^^^^^
# Replace the label "worker" back to "infra" using kubectl apply syntax
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/worker: ""#node-role.kubernetes.io/infra: ""#' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
node-role.kubernetes.io/infra: ""
# label replaced -------^^^^^
# Remove the label from the node ( for demonstration purpose)
$ kubectl get node node2 -o yaml | sed 's#node-role.kubernetes.io/infra: ""##' | kubectl apply -f -
node/node2 configured
# check the new state of the label
$ kubectl get node node2 -o yaml | grep node-role | grep -v apiVersion
# empty output
# label "infra" has been removed
You may see the following warning when you use kubectl apply -f for the first time on a resource that was created using imperative commands like kubectl create or kubectl expose:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
In this case the last-applied-configuration annotation will be created with the content of the file used in the kubectl apply -f filename.yaml command. It may not contain all the parameters and labels that are present in the live object.
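If you want to see what is currently recorded in that annotation, something like this should work (the dots in the annotation key have to be escaped in the jsonpath expression):
kubectl get node node2 -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'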