Kubernetes - Scaling the resource failed with: Job.batch is invalid

I'm trying to delete an existent job using
kubectl delete job/job-name -n my-namespace
But this error is displayed:
Scaling the resource failed with: Job.batch "kong-loop" is invalid:
spec.template: Invalid value: api.PodTemplateSpec{...}: field is
immutable; Current resource version 12189833

The solution posted by @esnible does work in this scenario, but it is simpler to do these steps:
Delete job with cascade false
kubectl delete job/jobname -n namespace --cascade=false
Delete any pod that exists
kubectl delete pod/podname -n namespace
Solution found in this Google Groups discussion: https://groups.google.com/forum/#!topic/kubernetes-users/YVmUgktoqtI
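Note: on recent kubectl versions the boolean --cascade=false form is deprecated; the equivalent spelling is --cascade=orphan:
kubectl delete job/jobname -n namespace --cascade=orphan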

kubectl does an HTTP PUT to the job during the delete process. This PUT fails because the Job has gotten itself into an invalid state. We must DELETE without PUTing.
Try
kubectl proxy
curl -X DELETE localhost:8001/apis/batch/v1/namespaces/<namespace>/jobs/<jobname>
Then kill the kubectl proxy process. The namespace is typically default.
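As a one-shot sequence (using the job name and namespace from the question; substitute your own):
kubectl proxy &
PROXY_PID=$!
curl -X DELETE localhost:8001/apis/batch/v1/namespaces/my-namespace/jobs/kong-loop
kill $PROXY_PID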

Reload pod when secret changes

Let's say that I have a running pod named my-pod
my-pod reads the secrets from foobar-secrets
Now let's say that I update some value in foobar-secrets
kubectl patch secret foobar-secrets --namespace kube-system --context=cluster-1 --patch "{\"data\": {\"FOOBAR\": \"$FOOBAR_BASE64\"}}"
What should I do to restart/reload the pod so that it picks up the new value?
https://github.com/stakater/Reloader is the usual solution for a fully standalone setup. Another option is https://github.com/jimmidyson/configmap-reload or similar, but that requires the daemon process to expose some kind of API for reloading.
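For example, with Reloader deployed in the cluster, annotating the workload is enough to trigger a rolling restart when the secret changes. The Deployment name below is hypothetical, and the annotation key follows Reloader's documented convention, so verify it against the version you install:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-pod-deployment   # hypothetical workload that consumes foobar-secrets
  annotations:
    secret.reloader.stakater.com/reload: "foobar-secrets"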

How do I undo a kubectl create deploy?

I was setting up an nginx cluster on Google Cloud and I entered a wrong image name; instead of entering:
kubectl create deploy nginx --image=nginx:1.17.10
I entered:
kubectl create deploy nginx --image=1.17.10
and eventually, after running kubectl get pods, it showed ImagePullBackOff as the status for the pod.
When I tried running the correct create deploy command above, it said "nginx" already exists.
When I tried kubectl delete --all pods, the pod was recreated with a new ID but still had the same status, and I still couldn't run the correct kubectl create deploy command above. Now I'm stuck.
How can I undo it?
You need to delete the deployment:
kubectl delete deploy nginx
Otherwise the Deployment will recreate the pod every time it goes down.
You can see all your deployments with
kubectl get deploy
Edit the deployment via kubectl edit deployment DEPLOYMENT_NAME and change the image name.
Or
Edit the manifest file to set the correct image name and run kubectl apply -f on the YAML file.
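Alternatively, kubectl set image updates the image in place (this assumes the container is named nginx, which is what kubectl create deploy produces):
kubectl set image deploy/nginx nginx=nginx:1.17.10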
First of all, your k8s cluster is trying to pull the image 1.17.10 from the public Docker registry. No image exists with that name, which is why you get the error. When you delete your pods, the Deployment recreates them with the same (wrong) image name, because the Deployment still exists. For this reason you need to delete the Deployment rather than the pods; otherwise the Deployment will keep recreating the deleted pods.
You can check what the error in your deployment was with this command:
kubectl describe deploy nginx
For you the command will be kubectl delete deploy -n <Namespace_name> <deployment_name>. As you created your deployment in the default namespace, you don't need to mention the namespace; it will be the default one automatically.
You can delete the deployment with this command:
kubectl delete deploy nginx

Kubernetes livenessProbe/readinessProbe deploy problem

I tried to apply my pod with a livenessProbe and a readinessProbe.
The problem is that I get an error after running kubectl apply -f test1.yaml.
Error:
The Pod "test1" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)....
Check whether you have a pod named test1 already running. When you apply your YAML, Kubernetes thinks you want to modify the pod that is already running, and that action only permits changes to the specific fields the message indicates.
To check whether the pod is already running, use this command:
kubectl get pod test1
Then delete the pod and re-apply your YAML:
kubectl delete pod test1
kubectl apply -f test1.yaml
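Alternatively, kubectl replace --force performs the delete-and-recreate in one step (it deletes the existing object and creates a new one from the file, so use with care):
kubectl replace --force -f test1.yaml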

kubectl delete all resources except the kubernetes service

Is there a variant of kubectl delete all --all command or some other command to delete all resources except the kubernetes service?
I don't think there's a built-in command for it, which means you'll have to script your way out of it, something like this (add an if for the namespace you want to spare; see the sketch after the note below):
$ for ns in $(kubectl get ns --output=jsonpath={.items[*].metadata.name}); do kubectl delete ns/$ns; done;
Note: deleting a namespace deletes all its resources.
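A sketch of the same loop with the "if" mentioned above (the namespaces spared here are assumptions; adjust the list to your cluster, and note that protected system namespaces may refuse deletion anyway):
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  # Spare the namespaces we want to keep; "default" hosts the kubernetes service.
  case "$ns" in
    default|kube-system|kube-public|kube-node-lease) continue ;;
  esac
  kubectl delete ns "$ns"
done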

How to delete all resources from Kubernetes one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a replicationcontroller, deployments will be regenerated when deleted. Is there a way to get Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks @Marcus): all in Kubernetes does not refer to every Kubernetes object; admin-level resources (limits, quota, policy, authorization rules) are not included. If you really want to make sure to delete everything, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
A Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource:
kubectl create -f custom-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployments, ReplicaSets, Services, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace; by deleting it, all of the other resources are deleted. Without this, a ReplicaSet might create new pods when existing pods are deleted.
To work with a namespace, you need to add the --namespace flag to your kubectl commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
You can list all the pods in custom-namespace:
kubectl get pods --namespace=custom-namespace
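If you don't want to repeat the flag on every command, you can make custom-namespace the default for the current context:
kubectl config set-context --current --namespace=custom-namespace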
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the labels below are attached to all of your resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all resources, including uninitialized ones
--all-namespaces means in all namespaces
First back up your namespace resources, then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources will still exist in your cluster.
Check with:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all the K8s resources in the cluster, the easiest way would be to delete the entire namespace:
kubectl delete ns <name-space>
For a more targeted cleanup, you can delete a specific set of resource types matching a label:
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
You can run
kubectl delete all --all
to delete all the resources in the cluster.
After deleting all resources, Kubernetes will relaunch the default services for the cluster.