How to uninstall or remove policy-demo in Calico

I'm new to Calico and trying to secure a Kubernetes cluster with it.
I installed kubectl using the command curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl, referring to the docs here.
I tried out the Calico Kubernetes NetworkPolicy tutorial using kubectl commands, referring to the guide here. First I created the namespace: kubectl create ns policy-demo
This was followed by creating the nginx deployment and service:
kubectl run --namespace=policy-demo nginx --replicas=2 --image=nginx
kubectl expose --namespace=policy-demo deployment nginx --port=80
Now, I want to uninstall and remove the policy-demo and namespace from the system.
Is there any way I can remove this from my system with a command?
How can I uninstall and remove policy-demo?

There is a simple way of doing it.
Running kubectl delete ns policy-demo deletes the policy-demo namespace, along with everything in it (the nginx deployment, pods, and service), from the cluster.
This is mentioned at the end of the tutorial itself: https://docs.projectcalico.org/v2.6/getting-started/kubernetes/tutorials/simple-policy
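For example, to delete and then confirm the cleanup (the second command reports NotFound once the namespace finishes terminating):
kubectl delete ns policy-demo
kubectl get ns policy-demo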

Related

How to uninstall ArgoCD from Kubernetes cluster?

I've installed ArgoCD on my kubernetes cluster using
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Now, how do I remove it from the cluster completely?
You can delete the entire installation using this: kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
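If you also want to remove the namespace itself (assuming nothing else runs in argocd), delete it afterwards:
kubectl delete namespace argocd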

Unknown image flag when creating deployment using Minikube kubectl

I am getting "unknown flag: --image" when creating a deployment using minikube on Windows 10 cmd. Why?
C:\WINDOWS\system32>minikube kubectl create deployment nginxdepl --image=nginx
Error: unknown flag: --image
See 'minikube kubectl --help' for usage.
C:\WINDOWS\system32>
When using the kubectl bundled with minikube, the command is a little different.
From the documentation, your command should be:
minikube kubectl -- create deployment nginxdepl --image=nginx
The difference is the -- right after kubectl.
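The -- tells minikube to stop parsing its own flags and pass everything after it straight to the bundled kubectl; for example, this prints the bundled client's version:
minikube kubectl -- version --client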
The problem is your command: you are mixing minikube and kubectl.
minikube is for managing your one-node local dev cluster.
kubectl is used for interacting with your cluster.
you should be using the following command:
kubectl create deployment nginxdepl --image nginx
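Either way, you can verify the result afterwards (assuming your kubectl context points at the minikube cluster):
kubectl get deployment nginxdepl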

Using kubectl to rollout openshift DeploymentConfig

I'm using an OpenShift container cluster to run my project.
In my CI I'm using helm and kubectl to upgrade and roll out the deployments.
Following this guide, I have created this simple DeploymentConfig:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
...
When I run helm upgrade --install I can see the new deployment in my openshift cluster.
But when I try to roll out the deployment using kubectl, it fails:
helm upgrade --install --wait --namespace myapp nginx chart/
kubectl rollout status -n myapp -w "dc/nginx"
I'm getting this error: error: no kind "DeploymentConfig" is registered for version "apps.openshift.io/v1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
Running kubectl api-versions does display "apps.openshift.io/v1" though.
Why can't I rollout the deployment using kubectl?
kubectl, the Kubernetes command line interface (CLI), is used to run commands against a Kubernetes cluster, while DeploymentConfigs are specific to OpenShift distributions and not available in standard Kubernetes.
That said, since oc is built on top of kubectl, converting a kubectl binary to oc is as simple as changing the binary's name from kubectl to oc.
See the documentation on usage of oc and kubectl for more information.
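So, assuming an oc client that is logged in to the cluster, the rollout from the question can be tracked with oc instead of kubectl (a sketch mirroring the original command):
oc rollout status -n myapp -w dc/nginx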

Uninstall istio (all components) completely from kubernetes cluster

I installed istio using these commands:
VERSION = 1.0.5
GCP = gcloud
K8S = kubectl
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
#$(K8S) get pods -n istio-system
#$(K8S) label namespace default istio-injection=enabled
#$(K8S) get svc istio-ingressgateway -n istio-system
Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?
Thanks.
If you used istioctl, it's pretty easy:
istioctl x uninstall --purge
Of course, it would be easier if that command were listed in istioctl --help...
Reference: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
Based on their documentation here, you can generate all the specs as a YAML file and then pipe it to a simple kubectl delete:
istioctl manifest generate <your original installation options> | kubectl delete -f -
here's an example:
istioctl manifest generate --set profile=default | kubectl delete -f -
A drawback of this approach is that you need to pass exactly the same options you used when installing Istio, which can be hard to remember, especially if you enabled specific components.
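One way around that, sketched here with the default profile and a hypothetical file name, is to save the generated manifest at install time and delete from that exact file later:
istioctl manifest generate --set profile=default > istio-install.yaml
kubectl apply -f istio-install.yaml
# later, to uninstall:
kubectl delete -f istio-install.yaml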
If you installed istio using its Helm chart, you can uninstall it easily.
First, list all installed charts:
helm list -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS
istiod istio-system 1 2020-03-07 15:01:56.141094 -0500 EST deployed
and then delete/uninstall each release. With Helm 3 the syntax is helm uninstall (--purge was a Helm 2 flag, where the equivalent was helm delete --purge <release>):
helm uninstall -n istio-system istiod
helm uninstall -n istio-system istio-init
...
Check their website for more information on how to do this.
If you installed istio using istioctl or helm into its own separate namespace, you can simply delete that namespace, which will in turn delete all resources created inside it.
kubectl delete namespace istio-system
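Namespace deletion is asynchronous; you can watch it finish (the namespace shows Terminating until its resources and finalizers are cleaned up):
kubectl get ns istio-system -w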
Just run kubectl delete for the files you applied.
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
You can find this in docs as well.
If you installed it as described, you will need to delete it in the same way.
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
Then manually delete the extracted folder, and the istioctl binary, if you moved it anywhere.
IMPORTANT: Deleting a namespace is a very comfortable way to clean up, but it doesn't cover all scenarios. In this situation, if you delete the namespace only, you leave all the cluster-scoped permissions and credentials intact. Now, say you want to update Istio, and the Istio team has made some security changes in their RBAC rules but has not changed the name of the object. You would deploy the new yaml file, and it would throw an error saying the object (for example a clusterrolebinding) already exists. If you don't pay attention to what that error was, you can end up with the worst type of error: no error message, but something goes wrong.
Cleaning up Istio is a bit tricky, because of all the things it adds: CustomResourceDefinitions, ConfigMaps, MutatingWebhookConfigurations, etc. Just deleting the istio-system namespace is not sufficient. The safest bet is to use the uninstall instructions from istio.io for the method you used to install.
Kubectl: https://istio.io/docs/setup/kubernetes/install/kubernetes/#uninstall
Helm: https://istio.io/docs/setup/kubernetes/install/helm/#uninstall
When performing these steps, use the version of Istio you are attempting to remove. So if you are trying to remove Istio 1.0.2, grab that release from istio.io.
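For example, to grab the 1.0.2 release archive before following its uninstall steps (a sketch; the URL follows the naming pattern of Istio's GitHub release assets):
curl -L https://github.com/istio/istio/releases/download/1.0.2/istio-1.0.2-linux.tar.gz | tar xz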
Don't forget to disable the injection:
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
kubectl label namespace your-namespace istio-injection=disabled --overwrite
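Alternatively, you can remove the label entirely; kubectl deletes a label when you suffix its key with a dash:
kubectl label namespace your-namespace istio-injection-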
Using the profile you used at installation time, demo for example, run the following command:
istioctl manifest generate --set profile=demo | kubectl delete -f -
After a normal istio uninstall (whether istio was installed with helm or istioctl), the following steps can be performed.
Check if anything still exists in the istio-system namespace; if so, delete it manually, and remove the istio-system namespace as well.
Check if there is a sidecar associated with any pod (sidecars sometimes do not get cleaned up after a failed uninstallation):
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{..image}{"\n\n"}{end}' -A | grep 'istio/proxyv' | grep -v istio-system
Get the CRDs that are still in use and remove the associated resources:
kubectl get crds | grep 'istio.io' | cut -f1-1 -d "." | xargs -n1 -I{} bash -c " echo {} && kubectl get --all-namespaces {} -o wide && echo -e '---'"
Delete all the CRDs:
kubectl get crds | grep 'istio.io' | xargs -n1 -I{} sh -c "kubectl delete crd {}"
Edit the labels back (optional):
kubectl label namespace <namespace name> istio-injection=disabled --overwrite
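After deleting the CRDs above, a quick check that should now return nothing:
kubectl get crds | grep 'istio.io'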
Just delete the namespace:
kubectl delete ns istio-system
Deleting CRDs without needing to find the helm charts:
kubectl delete crd -l chart=istio
If you installed via helm template, you can use these commands.
For the CRDs:
$ helm template ${ISTIO_BASE_DIR}/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl delete -f -
$ kubectl delete crd $(kubectl get crd | grep istio | awk '{print $1}')
For Deployments, Namespaces, and other resources:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-demo.yaml \
--set global.controlPlaneSecurityEnabled=true \
--set global.mtls.enabled=true | kubectl delete -f -

Enable Istio in fission

I have a Kubernetes (v1.10) cluster with Istio installed, and I'm trying to install Fission following the Enabling Istio on Fission guide. When I run
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz
it throws an error saying
Error: the server has asked for the client to provide credentials
(My cluster has two nodes and one master, created using Kubespray; all are Ubuntu 16.04 machines.)
I think that error is probably an authentication failure between helm and the cluster. Are you able to run kubectl version? How about helm ls?
If you have follow up questions, could you ask them on the fission slack? You'll get quicker answers there.
I think the problem is with helm.
Solution
Remove the .helm folder (it lives in your home directory):
rm -rf ~/.helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
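Once the tiller pod in kube-system is Running, a quick sanity check is that helm version reports both the client and the server (tiller) versions:
helm version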