Install Istio on a multi-master Kubernetes cluster

I have read about Istio and need to install it in Kubernetes.
I don't know the best way to install Istio in a multi-node Kubernetes cluster.
The setup has multiple master nodes and multiple worker (slave) nodes.
Is the best approach an Istio multicluster installation, or a single installation with automatic sidecar injection?
Regards.

The number of master and worker nodes in your Kubernetes cluster makes no difference when installing Istio.
You can follow the official Istio installation instructions.
Briefly, you need to:
Download an Istio release
Install Istio's Custom Resource Definitions from that release using kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
Install the Istio components using one of these options:
without mutual TLS authentication between sidecars, using kubectl apply -f install/kubernetes/istio-demo.yaml
with default mutual TLS authentication, using kubectl apply -f install/kubernetes/istio-demo-auth.yaml
Render the Kubernetes manifests with Helm and deploy them with kubectl (see the example after this list)
Use Helm and Tiller to manage the Istio deployment
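For the Helm-rendering option, a rough sketch from the root of the extracted release looks like this (the chart path and flags are from the Helm 2-era Istio charts and may differ in your release):
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio.yaml
kubectl create namespace istio-system
kubectl apply -f istio.yaml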
For automatic injection, you need to install the istio-sidecar-injector component and add the istio-injection=enabled label to each namespace in which you want it to work.
Example commands:
kubectl label namespace <namespace> istio-injection=enabled
kubectl create -n <namespace> -f <your-app-spec>.yaml
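To confirm that automatic injection is actually working (the namespace is a placeholder), check the namespace label and the pod container count; injected pods show 2/2 containers, the application container plus the istio-proxy sidecar:
kubectl get namespace -L istio-injection
kubectl get pods -n <namespace>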

Related

Argo CD with EKS Fargate

Is there anyone who uses Argo CD on EKS Fargate? There seems to be an issue with the Argo CD setup on Fargate: all pods are stuck in the Pending state.
I've tried installing into the argocd namespace as well as existing ones, and it still doesn't work.
I tried to install it using the commands below:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Make sure you have created a Fargate profile with a namespace selector for argocd; a missing profile is a common reason for the pods staying in Pending.
Refer to https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile
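If the cluster is managed with eksctl, a sketch of creating such a profile looks like this (the cluster and profile names are placeholders); after the profile exists, you may need to delete the Pending pods so they get rescheduled onto Fargate:
eksctl create fargateprofile --cluster <cluster-name> --name fp-argocd --namespace argocd
kubectl -n argocd delete pods --all
kubectl -n argocd get pods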

Firewall/Port requirements for Helm 2.16

We are installing Helm v2.16 on Kubernetes v1.14 in offline mode. We downloaded the Tiller Docker image and loaded it on the server where we were installing Helm.
i. No Internet access from the application servers
ii. Limited port connectivity between the Kubernetes master and worker nodes (no unrestricted any-to-any connectivity between the servers). The ports open between the application servers are:
a. 10250-10260
b. 6443
c. 443
d. 2379-2380
e. NodePort range 30000-32767
f. 44134-44135
We downloaded Helm 2.16 and installed it following the steps below. The Tiller pod failed to come up until we allowed ALL communication between the Kubernetes master and worker nodes, which suggests there are specific firewall requirements for Helm/Tiller to function in a Kubernetes cluster. Could someone please share the port/firewall details? We do not want to open all traffic even between the nodes of a cluster; we would rather open specific ports.
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh
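Not an authoritative port list, but a sketch of how to narrow down where it breaks: check whether the Tiller pod is scheduled and healthy, and whether the client can reach it (the Helm 2 client normally reaches Tiller through a port-forward via the API server, while Tiller itself listens on 44134/44135 inside the cluster). The label selector assumes the default tiller-deploy deployment:
kubectl -n kube-system get pods -l app=helm,name=tiller -o wide
kubectl -n kube-system describe pod -l app=helm,name=tiller
kubectl -n kube-system logs -l app=helm,name=tiller
helm version --debug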

Helm3 with EKS cluster

While installing Helm 3 (stable), I found that it no longer deploys Tiller for fetching cluster details; it works as a client-only utility. My question is: if it does not implement the Tiller concept for fetching details, how does it connect to EKS?
I have already installed kubectl and it is running fine. Is the Helm client dependent on kubectl?
I performed the following steps:
1. helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
2. kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
3. notepad rbac-config.yaml
4. kubectl apply -f rbac-config.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-role-binding created
5. helm init --service-account tiller
Error: unknown flag: --service-account
I know steps 2, 3, and 4 are not required in Helm 3, but I am curious how Helm 3 interacts with the EKS cluster as a client-only tool.
Just like kubectl, Helm uses the kubeconfig file to communicate with the cluster.
So both kubectl and Helm depend on the cluster's config file rather than depending on each other.
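As a sketch with an EKS cluster (region and cluster name are placeholders), generating the kubeconfig once makes both tools work; Helm 3 reads the same file and also accepts a --kubeconfig flag:
aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get nodes
helm ls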

Role of Helm install command vs kubectl command in Kubernetes cluster deployment

I have a Kubernetes cluster with 1 master node and 2 worker nodes, and another machine where I have installed Helm. I am trying to create Kubernetes resources from a Helm chart and deploy them to the remote Kubernetes cluster.
While reading about the helm install command, I found that we need both the helm and kubectl commands for deploying.
My confusion is this: when we use helm install, the chart is deployed to Kubernetes, and we can also push it to a chart repo. So Helm is what does the deploying. Why do we also need the kubectl command alongside Helm?
Helm 3: no Tiller. helm install deploys straight to the Kubernetes API using the same kubeconfig that kubectl uses, so to use helm you also need a configured kubectl.
Helm 2:
Helm and Tiller are client/server: the helm client needs to connect to Tiller to initiate the deployment. Because Tiller is not publicly exposed, helm opens a tunnel to it through the Kubernetes API, the same port-forward mechanism kubectl uses. See here: https://github.com/helm/helm/issues/3745#issuecomment-376405184
So to use helm, you also need a configured kubectl. More details: https://helm.sh/docs/using_helm/
Chart repo: this is a different concept (the same for Helm 2 and Helm 3), and it is not mandatory. Chart repos are like artifact storage; for example, in the quay.io application registry you can audit who pushed and who used a chart. More details: https://github.com/helm/helm/blob/master/docs/chart_repository.md. You can always bypass the repo and install from source, e.g. helm install /path/to/chart/src
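A minimal sketch of that workflow from the separate Helm machine, assuming a kubeadm-built cluster (the admin.conf path and the chart directory are assumptions):
scp user@<master-node>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes
helm install my-release ./mychart
With Helm 2 the last command would be helm install --name my-release ./mychart, and it additionally requires Tiller (helm init) in the target cluster.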

Kubernetes helm - Running helm install in a running pod

I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster, also using helm install.
I'm aware of Helm dependencies, but I want to run some business logic around the installations, and I'd rather do it in the installer pod than on the host triggering the whole installation process.
I found suggestions about using the Kubernetes REST API from inside a pod, but Helm requires kubectl to be installed and configured.
Any ideas?
It turned out this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's token secret that is already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed Helm, which was already able to use that same cluster access to install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
"In-cluster" credentials can be provided via the service account token in the "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/