Can't install confluent helm chart on minikube - kubernetes

I'm trying to use the Confluent Helm chart, following this link: https://github.com/confluentinc/cp-helm-charts/tree/e17565cd5a6985a594155b12b08068cb5882e51f/charts/cp-kafka-connect. However, when I install it on minikube, the pods end up in ImagePullBackOff:
confluent-oss3-cp-control-center-5fc8c494c8-k25ps 0/1 ImagePullBackOff 0 76m
confluent-oss3-cp-kafka-0 0/2 ImagePullBackOff 0 76m
confluent-oss3-cp-kafka-connect-7849d49c47-jmmrn 0/2 ImagePullBackOff 0 76m
confluent-oss3-cp-kafka-rest-777cc4899b-zqcf9 0/2 ImagePullBackOff 0 76m
confluent-oss3-cp-ksql-server-567646677-b8lw4 0/2 ImagePullBackOff 0 76m
confluent-oss3-cp-schema-registry-6b8d69887d-5cmvt 0/2 ErrImagePull 0 76m
confluent-oss3-cp-zookeeper-0 0/2 ImagePullBackOff 0 76m
Is there any solution to fix this problem?

I solved the issue by deleting the minikube cluster and then reinstalling Helm and the chart.
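Roughly, that amounted to something like the following (a sketch assuming Helm 2 and a local clone of cp-helm-charts; confluent-oss3 is the release name from the pod list above):
minikube delete
minikube start
helm init                                         # Helm 2 only; not needed with Helm 3
helm install --name confluent-oss3 ./cp-helm-charts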

Related

PostgreSQL-HA on Kubernetes recover from Volume Snapshot?

I have a Kubernetes VolumeSnapshot created as a backup of the pgsql-ha persistent volume.
I am able to recover the PVC by specifying the snapshot as the dataSource, and I am now trying to create a new pgsql-ha cluster with the Helm chart and attach this PVC to recover the data. Here is the example installation command:
helm install db-ha bitnami/postgresql-ha \
--set postgresql.password=$PWD \
--set persistence.existingClaim="pvc-restore-from-snapshot"
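For context, the PVC I restored from the snapshot was created along these lines (a sketch; the snapshot name, storage class, and size here are only illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore-from-snapshot
spec:
  storageClassName: ebs-sc                  # assumed EBS CSI storage class
  dataSource:
    name: pgsql-ha-snapshot                 # the VolumeSnapshot taken earlier
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi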
Then the pgpool Pod and both postgresql Pods show CrashLoopBackOff forever.
$ kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
db-ha-pgpool-gradfergr43sfxv 0/1 Running 0 8s
db-ha-postgresql-0 0/1 Init:0/1 0 8s
db-ha-postgresql-1 0/1 Init:0/1 0 8s
db-ha-postgresql-1 0/1 PodInitializing 0 23s
db-ha-postgresql-0 0/1 PodInitializing 0 23s
db-ha-postgresql-1 0/1 Error 0 24s
db-ha-postgresql-0 0/1 Error 0 24s
db-ha-postgresql-1 0/1 Error 1 25s
db-ha-postgresql-0 0/1 Error 1 25s
db-ha-postgresql-1 0/1 CrashLoopBackOff 1 26s
db-ha-postgresql-0 0/1 CrashLoopBackOff 1 27s
From what I have read so far in this issue, persistence.existingClaim is only supported when the replica count is set to 1, which means the data can only be restored on a non-HA cluster; pgsql-ha is currently unable to replicate a manually specified PVC.
So I'm wondering the following:
If that is the whole story, and there is nothing that I'm missing
If it's possible to modify the storageClass or even the provisioner (ebs-csi), so that the existing PVC can be used
If other workarounds exist for this workflow
Many thanks!

Delete all the pods created by applying Helm 2.13.1

I'm new to Helm. I'm trying to deploy a simple server on the master node. When I run helm install and check the details with kubectl get po,svc, I see a lot of pods created other than the pods I intended to deploy. So my precise questions are:
Why so many pods got created?
How do I delete all those pods?
Below is the output of the command kubectl get po,svc:
NAME READY STATUS RESTARTS AGE
pod/altered-quoll-stx-sdo-chart-6446644994-57n7k 1/1 Running 0 25m
pod/austere-garfish-stx-sdo-chart-5b65d8ccb7-jjxfh 1/1 Running 0 25m
pod/bald-hyena-stx-sdo-chart-9b666c998-zcfwr 1/1 Running 0 25m
pod/cantankerous-pronghorn-stx-sdo-chart-65f5699cdc-5fkf9 1/1 Running 0 25m
pod/crusty-unicorn-stx-sdo-chart-7bdcc67546-6d295 1/1 Running 0 25m
pod/exiled-puffin-stx-sdo-chart-679b78ccc5-n68fg 1/1 Running 0 25m
pod/fantastic-waterbuffalo-stx-sdo-chart-7ddd7b54df-p78h7 1/1 Running 0 25m
pod/gangly-quail-stx-sdo-chart-75b9dd49b-rbsgq 1/1 Running 0 25m
pod/giddy-pig-stx-sdo-chart-5d86844569-5v8nn 1/1 Running 0 25m
pod/hazy-indri-stx-sdo-chart-65d4c96f46-zmvm2 1/1 Running 0 25m
pod/interested-macaw-stx-sdo-chart-6bb7874bbd-k9nnf 1/1 Running 0 25m
pod/jaundiced-orangutan-stx-sdo-chart-5699d9b44b-6fpk9 1/1 Running 0 25m
pod/kindred-nightingale-stx-sdo-chart-5cf95c4d97-zpqln 1/1 Running 0 25m
pod/kissing-snail-stx-sdo-chart-854d848649-54m9w 1/1 Running 0 25m
pod/lazy-tiger-stx-sdo-chart-568fbb8d65-gr6w7 1/1 Running 0 25m
pod/nonexistent-octopus-stx-sdo-chart-5f8f6c7ff8-9l7sm 1/1 Running 0 25m
pod/odd-boxer-stx-sdo-chart-6f5b9679cc-5stk7 1/1 Running 1 15h
pod/orderly-chicken-stx-sdo-chart-7889b64856-rmq7j 1/1 Running 0 25m
pod/redis-697fb49877-x5hr6 1/1 Running 0 25m
pod/rv.deploy-6bbffc7975-tf5z4 1/2 CrashLoopBackOff 93 30h
pod/sartorial-eagle-stx-sdo-chart-767d786685-ct7mf 1/1 Running 0 25m
pod/sullen-gnat-stx-sdo-chart-579fdb7df7-4z67w 1/1 Running 0 25m
pod/undercooked-cow-stx-sdo-chart-67875cc5c6-mwvb7 1/1 Running 0 25m
pod/wise-quoll-stx-sdo-chart-5db8c766c9-mhq8v 1/1 Running 0 21m
You can run the command helm ls to see all the deployed helm releases in your cluster.
To remove the release (and every resource it created, including the pods), run: helm delete RELEASE_NAME --purge.
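Judging by the pod names above, the chart was installed many times under different release names. With Helm 2 you can clean up every release in one go with something like the following (review the output of helm ls first):
helm ls --short | xargs -L1 helm delete --purge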
If you just want to delete all the pods in your namespace without going through Helm (I DON'T think this is what you're looking for), you can run: kubectl delete pods --all.
On a side note, if you're new to Helm, consider starting with Helm v3, since it has many improvements, and especially because the migration from v2 to v3 can become cumbersome; if you can avoid it, you should.
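For reference, the Helm 3 equivalents drop the --purge flag, since uninstalling a release removes its resources by default:
helm ls
helm uninstall RELEASE_NAME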

kubernetes UnexpectedAdmissionError after rollout

I had a service failing to reply to some HTTP requests. Digging into its logs, it seemed to be some sort of DNS failure when reaching a proxy service:
'proxy' failed to resolve 'proxy.default.svc.cluster.local' after 2 queries
I could not find anything wrong, so I tried kubectl rollout restart deployment/backend.
Just after that these appeared in the pods list:
backend-54769cbb4-xkwf2 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-xlpgf 0/1 UnexpectedAdmissionError 0 4h4m
backend-54769cbb4-xmnr5 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-xmq5n 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-xphrw 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-xrmrq 0/1 UnexpectedAdmissionError 0 4h1m
backend-54769cbb4-xrmw8 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-xt4ck 0/1 UnexpectedAdmissionError 0 4h4m
backend-54769cbb4-xws8r 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-xx6r4 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-xxpfd 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-xzjql 0/1 UnexpectedAdmissionError 0 4h4m
backend-54769cbb4-xzzlk 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-z46ms 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-z4sl7 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-z6jpj 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-z6ngq 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-z8w4h 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-z9jqb 0/1 UnexpectedAdmissionError 0 4h3m
backend-54769cbb4-zbvqm 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zcfxg 0/1 UnexpectedAdmissionError 0 4h3m
backend-54769cbb4-zcvqm 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-zf2f8 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zgnkh 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-zhdr8 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zhx6g 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-zj8f2 0/1 UnexpectedAdmissionError 0 4h3m
backend-54769cbb4-zjbwp 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-zjc8g 0/1 UnexpectedAdmissionError 0 4h3m
backend-54769cbb4-zjdcp 0/1 UnexpectedAdmissionError 0 4h4m
backend-54769cbb4-zkcrb 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-zlpll 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zm2cx 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-zn7mr 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-znjkp 0/1 UnexpectedAdmissionError 0 4h3m
backend-54769cbb4-zpnk7 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zrrl7 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zsdsz 0/1 UnexpectedAdmissionError 0 4h4m
backend-54769cbb4-ztdx8 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-ztln6 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-ztplg 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-ztzfh 0/1 UnexpectedAdmissionError 0 4h2m
backend-54769cbb4-zvb8g 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-zwsr8 0/1 UnexpectedAdmissionError 0 4h7m
backend-54769cbb4-zwvxr 0/1 UnexpectedAdmissionError 0 4h5m
backend-54769cbb4-zwx6h 0/1 UnexpectedAdmissionError 0 4h6m
backend-54769cbb4-zz4bf 0/1 UnexpectedAdmissionError 0 4h1m
backend-54769cbb4-zzq6t 0/1 UnexpectedAdmissionError 0 4h2m
(and many more of these)
So I added two more nodes, and now everything seems fine except for this big list of pods in an error state which I don't understand. What is this UnexpectedAdmissionError, and what should I do about it?
Note: this is a DigitalOcean cluster
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:38:36Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
The output of kubectl describe on one of the failed pods seems important:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m51s default-scheduler Successfully assigned default/backend-549f576d5f-xzdv4 to std-16gb-g7mo
Warning UnexpectedAdmissionError 2m51s kubelet, std-16gb-g7mo Update plugin resources failed due to failed to write checkpoint file "kubelet_internal_checkpoint": write /var/lib/kubelet/device-plugins/.543592130: no space left on device, which is unexpected.
I had the same issue. While describing one of the pods with UnexpectedAdmissionError, I saw the following:
Update plugin resources failed due to failed to write deviceplugin checkpoint file "kubelet_internal_checkpoint": write /var/lib/kubelet/device-plugins/.525608957: no space left on device, which is unexpected.
and when describing the node:
OutOfDisk Unknown Tue, 30 Jun 2020 14:07:04 -0400 Tue, 30 Jun 2020 14:12:05 -0400 NodeStatusUnknown Kubelet stopped posting node status.
I resolved this by rebooting the node.
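Before rebooting, it is worth confirming on the node that the path from the error message really is out of space (or out of inodes):
df -h /var/lib/kubelet
df -i /var/lib/kubelet     # inode exhaustion also reports "no space left on device"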
Because the pod was never started, you can't actually check its logs. However, describing the pod provided me with the error. We had some disk/CPU/memory utilization issues with the worker5 node.
kubectl get pods -A -o wide | grep -i err
kube-system coredns-autoscaler-79599b9dc6-6l8s8 0/1 UnexpectedAdmissionError 0 10h <none> worker5 <none> <none>
kube-system coredns-autoscaler-79599b9dc6-kzt9z 0/1 UnexpectedAdmissionError 0 10h <none> worker5 <none> <none>
kube-system coredns-autoscaler-79599b9dc6-tgkrc 0/1 UnexpectedAdmissionError 0 10h <none> worker5 <none> <none>
kubectl describe pod -n kube-system coredns-autoscaler-79599b9dc6-kzt9z
Reason: UnexpectedAdmissionError
Message: Pod Allocate failed due to failed to write checkpoint file "kubelet_internal_checkpoint": mkdir /var: file exists, which is unexpected
The first step was rebooting the node, which fixed the issue. The root cause was that we had restored some backups to the new cluster, and the restore process caused this problem.
Because the failed pods were part of a ReplicaSet, replacements got spawned on other worker nodes, so we simply deleted the failed pods.
A quick way to delete a lot of pods is:
kubectl get pods -n namespace | grep -i Error | cut -d' ' -f 1 | xargs kubectl delete pod
To delete all the erroneous pods in the entire cluster, pass the namespace along with each pod name:
kubectl get pods -A | grep -i Error | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod
You can use flag -A/--all-namespaces to get pods from all namespaces in the cluster.
However, if they are not getting respawned automatically (which would be weird), you can run kubectl replace:
kubectl get pod coredns-autoscaler-79599b9dc6-6l8s8 -n kube-system -o yaml | kubectl replace --force -f -
For a more verbose read, please refer to kubectl replace --help and the following blog.

Volume is already attached by pod

I installed Kubernetes on Ubuntu on bare metal, with 1 master and 3 workers.
I then deployed Rook and everything worked fine, but when I tried to deploy WordPress on it, it got stuck in ContainerCreating. I deleted WordPress, and now I get this error:
Volume is already attached by pod
default/wordpress-mysql-b78774f44-gvr58. Status Running
#kubectl describe pods wordpress-mysql-b78774f44-bjc2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m21s default-scheduler Successfully assigned default/wordpress-mysql-b78774f44-bjc2c to worker2
Warning FailedMount 2m57s (x6 over 3m16s) kubelet, worker2 MountVolume.SetUp failed for volume "pvc-dcba7817-553b-11e9-a229-52540076d16c" : mount command failed, status: Failure, reason: Rook: Mount volume failed: failed to attach volume pvc-dcba7817-553b-11e9-a229-52540076d16c for pod default/wordpress-mysql-b78774f44-bjc2c. Volume is already attached by pod default/wordpress-mysql-b78774f44-gvr58. Status Running
Normal Pulling 2m26s kubelet, worker2 Pulling image "mysql:5.6"
Normal Pulled 110s kubelet, worker2 Successfully pulled image "mysql:5.6"
Normal Created 106s kubelet, worker2 Created container mysql
Normal Started 101s kubelet, worker2 Started container mysql
For more information:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-dcba7817-553b-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/mysql-pv-claim rook-ceph-block 13m
pvc-e9797517-553b-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/wp-pv-claim rook-ceph-block 13m
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound pvc-dcba7817-553b-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 15m
wp-pv-claim Bound pvc-e9797517-553b-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 14m
#kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default wordpress-595685cc49-sdbfk 1/1 Running 6 9m58s
default wordpress-mysql-b78774f44-bjc2c 1/1 Running 0 8m14s
kube-system coredns-fb8b8dccf-plnt4 1/1 Running 0 46m
kube-system coredns-fb8b8dccf-xrkql 1/1 Running 0 47m
kube-system etcd-master 1/1 Running 0 46m
kube-system kube-apiserver-master 1/1 Running 0 46m
kube-system kube-controller-manager-master 1/1 Running 1 46m
kube-system kube-flannel-ds-amd64-45bsf 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-5nxfz 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-pnln9 1/1 Running 0 40m
kube-system kube-flannel-ds-amd64-sg4pv 1/1 Running 0 40m
kube-system kube-proxy-2xsrn 1/1 Running 0 47m
kube-system kube-proxy-mll8b 1/1 Running 0 42m
kube-system kube-proxy-mv5dw 1/1 Running 0 42m
kube-system kube-proxy-v2jww 1/1 Running 0 42m
kube-system kube-scheduler-master 1/1 Running 0 46m
rook-ceph-system rook-ceph-agent-8pbtv 1/1 Running 0 26m
rook-ceph-system rook-ceph-agent-hsn27 1/1 Running 0 26m
rook-ceph-system rook-ceph-agent-qjqqx 1/1 Running 0 26m
rook-ceph-system rook-ceph-operator-d97564799-9szvr 1/1 Running 0 27m
rook-ceph-system rook-discover-26g84 1/1 Running 0 26m
rook-ceph-system rook-discover-hf7lc 1/1 Running 0 26m
rook-ceph-system rook-discover-jc72g 1/1 Running 0 26m
rook-ceph rook-ceph-mgr-a-68cb58b456-9rrj7 1/1 Running 0 21m
rook-ceph rook-ceph-mon-a-6469b4c68f-cq6mj 1/1 Running 0 23m
rook-ceph rook-ceph-mon-b-d59cfd758-2d2zt 1/1 Running 0 22m
rook-ceph rook-ceph-mon-c-79664b789-wl4t4 1/1 Running 0 21m
rook-ceph rook-ceph-osd-0-8778dbbc-d84mh 1/1 Running 0 19m
rook-ceph rook-ceph-osd-1-84974b86f6-z5c6c 1/1 Running 0 19m
rook-ceph rook-ceph-osd-2-84f9b78587-czx6d 1/1 Running 0 19m
rook-ceph rook-ceph-osd-prepare-worker1-x4rqc 0/2 Completed 0 20m
rook-ceph rook-ceph-osd-prepare-worker2-29jpg 0/2 Completed 0 20m
rook-ceph rook-ceph-osd-prepare-worker3-rkp52 0/2 Completed 0 20m
You are using a block storage class (rook-ceph-block) for your PVC, and its access mode is ReadWriteOnce. This does not mean you can only connect your PVC to one pod, but only to one node.
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadWriteMany – the volume can be mounted as read-write by many nodes
Here, it seems like you have 2 pods trying to mount this volume. This will be flaky unless you do one of the following:
Schedule both pods on the same node
Use another storageClass, such as NFS (filesystem-based), to change the access mode to ReadWriteMany (see the sketch after this list)
Downscale to 1 pod, so you don't have to share the volume
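For the ReadWriteMany option, the claim would look roughly like this (a sketch; the storageClassName is an assumption and must point at a file-based provisioner such as NFS or CephFS):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: nfs-client               # assumed file-based storage class
  accessModes:
    - ReadWriteMany                           # can be mounted read-write by many nodes
  resources:
    requests:
      storage: 20Gi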
Right now you have 2 pods trying to mount the same volume, default/wordpress-mysql-b78774f44-gvr58 and default/wordpress-mysql-b78774f44-bjc2c.
You can also downscale to 1 pod, so you don't have to worry about any of the above:
kubectl scale deploy wordpress-mysql --replicas=1

Istio Deployments Not happening

I am trying to install Istio in a minikube cluster.
I followed the tutorial on this page: https://istio.io/docs/setup/kubernetes/quick-start/
I am trying to use Option 1: https://istio.io/docs/setup/kubernetes/quick-start/#option-1-install-istio-without-mutual-tls-authentication-between-sidecars
I can see that the services have been created, but the deployments seem to have failed.
kubectl get pods -n istio-system
No resources found
How can I troubleshoot this?
Here are the results of kubectl get deployment:
kubectl get deployment -n istio-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
grafana 1 0 0 0 4m
istio-citadel 1 0 0 0 4m
istio-egressgateway 1 0 0 0 4m
istio-galley 1 0 0 0 4m
istio-ingressgateway 1 0 0 0 4m
istio-pilot 1 0 0 0 4m
istio-policy 1 0 0 0 4m
istio-sidecar-injector 1 0 0 0 4m
istio-telemetry 1 0 0 0 4m
istio-tracing 1 0 0 0 4m
prometheus 1 0 0 0 4m
servicegraph 1 0 0 0 4m
This is what worked for me. Don't use the --extra-config flags when starting minikube; they crash kube-controller-manager-minikube because it can't find this file:
error starting controllers: failed to start certificate controller:
error reading CA cert file "/var/lib/localkube/certs/ca.crt": open
/var/lib/localkube/certs/ca.crt: no such file or directory
Just start minikube with this command. I have minikube v0.30.0.
minikube start
Output:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubelet v1.10.0
Downloading kubeadm v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
From the istio-1.0.4 folder, run this command:
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
This should install all the required CRDs.
Then run this command:
kubectl apply -f install/kubernetes/istio-demo.yaml
After the rules, services, deployments, etc. have been created successfully, run this command:
kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-9cfc9d4c9-h2zn8 1/1 Running 0 5m
istio-citadel-74df865579-d2pbq 1/1 Running 0 5m
istio-cleanup-secrets-ghlbf 0/1 Completed 0 5m
istio-egressgateway-58df7c4d8-4tg4p 1/1 Running 0 5m
istio-galley-8487989b9b-jbp2d 1/1 Running 0 5m
istio-grafana-post-install-dn6bw 0/1 Completed 0 5m
istio-ingressgateway-6fc88db97f-49z88 1/1 Running 0 5m
istio-pilot-74bb7dcdd-xjgvz 0/2 Pending 0 5m
istio-policy-58878f57fb-t6fqt 2/2 Running 0 5m
istio-security-post-install-vqbzw 0/1 Completed 0 5m
istio-sidecar-injector-5cfcf6dd86-lr8ll 1/1 Running 0 5m
istio-telemetry-bf5558589-8hzcc 2/2 Running 0 5m
istio-tracing-ff94688bb-bwzfs 1/1 Running 0 5m
prometheus-f556886b8-9z6vp 1/1 Running 0 5m
servicegraph-55d57f69f5-fvqbg 1/1 Running 0 5m
Also try customizing the installation via Helm template. You don't need to have Tiller deployed for that.
1) kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
2) helm template install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set servicegraph.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set sidecarInjectorWebhook.enabled=true --set global.tag=1.0.5 > $HOME/istio.yaml
3) kubectl create namespace istio-system
4) kubectl apply -f $HOME/istio.yaml
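Since sidecarInjectorWebhook.enabled=true is set in step 2 above, you would typically also label the namespace running your workloads so the sidecar is injected automatically (default here is just an example):
kubectl label namespace default istio-injection=enabled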