How to restart a deployment in Kubernetes with Helm?

I deployed the Consul client on my k8s cluster using Helm:
sudo helm install hi-consul hashicorp/consul -n consul-client -f config.yaml
One of the pods is not working now. Is there a way to restart that pod with Helm?
Thanks

You can delete the pod with kubectl delete pod <pod-name> -n <namespace>; the controller that owns it will recreate it.
If you want all pods of a deployment to be restarted, you can use kubectl rollout restart deployment <deployment-name> -n <namespace>
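For the Consul install from the question, a minimal sketch could look like the following; the exact pod and workload names depend on the chart version, so check them first (the chart may run the client as a DaemonSet rather than a Deployment):
# list the pods created by the chart and spot the failing one
kubectl get pods -n consul-client
# delete just that pod; its controller recreates it
kubectl delete pod <failing-pod-name> -n consul-client
# or restart every pod of a workload, e.g. if the client runs as a DaemonSet
kubectl rollout restart daemonset <client-daemonset-name> -n consul-client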

Related

How to uninstall ArgoCD from Kubernetes cluster?

I've installed ArgoCD on my kubernetes cluster using
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Now, how do I remove it from the cluster completely?
You can delete the entire installation with kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
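If the argocd namespace was created only for Argo CD and nothing else runs in it, you can remove it too once the manifests are deleted:
kubectl delete namespace argocd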

How to reset K3s cluster pods

I have a k3s cluster with the following pods:
kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
How can I reset (stop and start again) these pods, either with a command (kubectl maybe) or a script?
To reset a pod, you can just delete it. If it is managed by a Deployment (the pods in your question should be), it will be recreated automatically.
kubectl delete pod <pod-name> <pod2-name> ... -n <namespace>
If the pods you want to reset share a common label, you can filter them with the --selector flag:
kubectl delete pods --selector=<label-name>=<label-value> -n <namespace>
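For the kube-system pods listed in the question, a concrete example could look like the following; the label values are taken from a typical k3s install and may differ on yours, so verify them first with kubectl get pods -n kube-system --show-labels:
kubectl delete pods -n kube-system --selector=k8s-app=kube-dns
kubectl delete pods -n kube-system --selector=k8s-app=metrics-server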
However, if you have modified the deployments themselves, you will need to re-apply the unmodified manifest:
kubectl apply -f <yaml-file>
Warning: this will reset your whole cluster and delete all running data.
This is not the exact answer, but it is the simplest one and takes only about a minute.
Just uninstall by running the command below:
sudo /usr/local/bin/k3s-uninstall.sh
Then install a fresh cluster with the command below:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" sh -
Then export the KUBECONFIG variable with the command below:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl may also complain about access to the k3s config file, so:
sudo chmod 444 /etc/rancher/k3s/k3s.yaml
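After the reinstall, a quick sanity check (assuming the default k3s workloads) is to confirm the node is Ready and the kube-system pods come back up:
kubectl get nodes
kubectl get pods -A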

Clean up Traefik CRDs

I've run a helm delete for my Traefik install on Kubernetes; however, I'm still seeing its CRDs in the cluster.
How do you get rid of these?
CRDs can be deleted just as any other object in Kubernetes: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#delete-a-customresourcedefinition
kubectl get crd <crd-name> -o yaml > crd.yaml
kubectl delete -f crd.yaml
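For Traefik specifically, you can list its CRDs and delete them by name; the exact names depend on the Traefik version (older releases use the traefik.containo.us API group, newer ones traefik.io), so treat the ones below as examples only:
# see which Traefik CRDs are still present
kubectl get crd | grep -i traefik
# delete them by name, for example
kubectl delete crd ingressroutes.traefik.containo.us middlewares.traefik.containo.us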

Using kubectl to rollout openshift DeploymentConfig

I'm using an OpenShift container cluster to run my project.
In my CI I'm using helm and kubectl to upgrade and roll out the deployments.
Following this guide, I have created this simple DeploymentConfig:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
...
When I run helm upgrade --install I can see the new deployment in my openshift cluster.
But when I try to roll out the deployment using kubectl, it fails:
helm upgrade --install --wait --namespace myapp nginx chart/
kubectl rollout status -n myapp -w "dc/nginx"
I'm getting this error: error: no kind "DeploymentConfig" is registered for version "apps.openshift.io/v1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
Running kubectl api-versions does display "apps.openshift.io/v1" though.
Why can't I rollout the deployment using kubectl?
Kubernetes' command line interface (CLI), kubectl, is used to run commands against a Kubernetes cluster, while DeploymentConfig is specific to OpenShift distributions and not available in standard Kubernetes; kubectl's built-in scheme therefore doesn't know the kind, even though the API server advertises apps.openshift.io/v1.
However, since oc is built on top of kubectl, converting a kubectl binary to oc is as simple as changing the binary's name from kubectl to oc.
See the documentation on the usage of kubectl and oc for more information.
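In practice that means driving the DeploymentConfig rollout with oc instead of kubectl; a sketch of the CI steps, assuming oc is installed and logged in to the same cluster, would be:
helm upgrade --install --wait --namespace myapp nginx chart/
oc rollout status -n myapp -w dc/nginx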

How to uninstall or remove policy-demo in Calico

New to Calico; I'm trying to secure a Kubernetes cluster using Calico.
I have installed kubectl using the command curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl, referring to the docs here.
I then tried the Calico Kubernetes NetworkPolicy tutorial using kubectl commands, referring here, starting by creating the namespace: kubectl create ns policy-demo
This is then followed by creating nginx pod and services:
kubectl run --namespace=policy-demo nginx --replicas=2 --image=nginx
kubectl expose --namespace=policy-demo deployment nginx --port=80
Now, I want to uninstall and remove policy-demo and its namespace from the system.
Is there any way I can do that and remove this from my system using a command?
How can I uninstall and remove the policy-demo?
There is a simple way of doing it.
You just need to run kubectl delete ns policy-demo, which will clean up and delete the policy-demo namespace (and everything in it) from the system using a kubectl command.
It is already mentioned at the end of the document here: https://docs.projectcalico.org/v2.6/getting-started/kubernetes/tutorials/simple-policy
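To double-check that everything from the tutorial is gone, you can verify that the namespace and the resources inside it no longer exist:
kubectl get ns policy-demo
kubectl get all -n policy-demo
Both commands should report that the namespace is not found once the deletion has completed.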