Firewall/Port requirements for Helm 2.16 - kubernetes-helm

We are installing Helm v2.16 on Kubernetes v1.14 in offline mode. We downloaded the Tiller Docker image and loaded it onto the servers where we were installing Helm. The constraints are:
i. No access to the Internet from the application servers.
ii. Limited port connectivity between the Kubernetes master and worker nodes (no wide-open "*" connectivity between the servers). The ports that are open between the application servers are:
a. 10250-10260
b. 6443
c. 443
d. 2379-2380
e. NodePort range 30000-32767
f. 44134-44135
We downloaded Helm 2.16 and installed it following the steps below. The Tiller pod failed to come up until we allowed ALL communication between the Kubernetes master and worker nodes, which means there are specific firewall requirements for Helm/Tiller to function in a Kubernetes cluster. Could someone please share the port/firewall details? We do not want to open ALL traffic between the nodes of a cluster; we would rather open specific ports.
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh
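For reference, Tiller itself normally listens on 44134 (gRPC) and 44135 (probes), and the Helm 2 client usually reaches it through the API server via a port-forward rather than connecting to the pod directly. A quick, non-authoritative way to confirm which ports are in play on your own cluster:
kubectl -n kube-system get deployment tiller-deploy -o jsonpath='{.spec.template.spec.containers[0].ports}'
kubectl -n kube-system get svc tiller-deploy
helm version --debug    # verbose output shows whether the client can reach Tiller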

Related

Installing telepresence with a pod security policy

I'm trying to install Telepresence into an EKS cluster that has PodSecurityPolicies. I've gotten the traffic manager installed by running Helm on the traffic manager chart:
helm install traffic-manager -n ambassador datawire/telepresence --create-namespace
After that I modify the traffic-manager-ambassador ClusterRole to use one of the cluster's PodSecurityPolicies. Installation of the traffic manager eventually succeeds after I do this. However, the installation of the uninstall-agent job fails:
Error creating: pods "uninstall-agents-" is forbidden: PodSecurityPolicy: unable to admit pod: []
My question is: what Role or ClusterRole do I have to modify to allow Helm to uninstall Telepresence? Or how do I figure out which ServiceAccount is being used to try to install the pod, so I can give it access to a PodSecurityPolicy?
I made some fixes at https://github.com/ddl-pjohnson/telepresence/pull/1/files to make it easier to add additional rules and to run the helm hook as the correct user.
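For readers hitting the same error, the usual pattern is to grant the hook's ServiceAccount the use verb on an existing PodSecurityPolicy, and to read the ServiceAccount name off the failing Job. The names below (Job, PSP, ServiceAccount) are placeholders rather than values taken from the telepresence chart, so treat this as a sketch:
# Which ServiceAccount does the failing hook Job run as? (Job name guessed from the error message)
kubectl -n ambassador get job uninstall-agents -o jsonpath='{.spec.template.spec.serviceAccountName}'
# Grant that ServiceAccount "use" on an existing PSP:
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: telepresence-psp-use
  namespace: ambassador
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]   # placeholder: a PSP that already exists in your cluster
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: telepresence-psp-use
  namespace: ambassador
subjects:
- kind: ServiceAccount
  name: traffic-manager           # placeholder: the ServiceAccount reported by the command above
  namespace: ambassador
roleRef:
  kind: Role
  name: telepresence-psp-use
  apiGroup: rbac.authorization.k8s.io
EOF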

How to restore an accidentally deleted kube-proxy DaemonSet in a Kubernetes cluster?

I accidentally deleted the kube-proxy DaemonSet with the command kubectl delete -n kube-system daemonset kube-proxy, which is what runs the kube-proxy pods in my cluster. What is the best way to restore it?
Kubernetes allows you to reinstall kube-proxy by running the following command, which installs the kube-proxy addon components via the API server.
$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address <api-server-address>
This will generate output like:
[addons] Applied essential addon: kube-proxy
Here --apiserver-advertise-address is the IP address the API server will advertise it's listening on; if not set, the default network interface will be used.
Hence kube-proxy will be reinstalled in the cluster by creating a DaemonSet and launching the pods.
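To confirm the restore, a couple of standard kubectl checks (the k8s-app=kube-proxy label is the one kubeadm normally applies):
kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system rollout status daemonset kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide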
The kube-proxy DaemonSet got created at the time of cluster creation, so unless you have a backup to restore it from, you will need to write your own DaemonSet manifest.
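A minimal sketch of that backup idea, assuming you have access to another cluster built the same way (e.g. with kubeadm) where kube-proxy still exists:
kubectl -n kube-system get configmap kube-proxy -o yaml > kube-proxy-configmap.yaml   # kube-proxy's config lives in this ConfigMap on kubeadm clusters
kubectl -n kube-system get daemonset kube-proxy -o yaml > kube-proxy-daemonset.yaml
# Remove cluster-specific metadata (status, resourceVersion, uid) from both files, then on the broken cluster:
kubectl apply -f kube-proxy-configmap.yaml
kubectl apply -f kube-proxy-daemonset.yaml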

Helm3 with EKS cluster

While installing the Helm 3 stable release, I found that Helm 3 does not deploy Tiller for fetching cluster details; it works as a client utility only. My question is: if it does not implement the Tiller concept for fetching details, how does it connect to EKS?
I have already installed kubectl and it is running fine. Is it the case that the Helm client depends on kubectl?
I performed the following steps:
1. helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
2. kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
3. notepad rbac-config.yaml
4. kubectl apply -f rbac-config.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-role-binding created
5. helm init --service-account tiller
Error: unknown flag: --service-account
I know steps 2, 3 and 4 are not required in Helm 3, but I am curious how Helm 3 interacts, as a client-only tool, with an EKS cluster.
Just like kubectl, helm also uses kubeconfig to communicate with the cluster.
So both kubectl and Helm depend on the cluster's kubeconfig file rather than on each other.
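A minimal sketch of how that looks with EKS (the cluster name and region below are examples):
aws eks update-kubeconfig --name my-cluster --region us-east-1   # writes EKS credentials into ~/.kube/config
kubectl get nodes                                                # kubectl talks to the API server via that kubeconfig
helm ls --all-namespaces                                         # Helm 3 uses the same kubeconfig, no Tiller involved
helm ls --kubeconfig /path/to/other/kubeconfig                   # a different kubeconfig can be passed explicitly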

Install Istio on a multi-master Kubernetes cluster

I read about Istio and I need to install it in Kubernetes.
I don't know the best way to install Istio in a multi-node Kubernetes cluster.
The setup is a cluster with multiple master nodes and multiple worker nodes.
Is the best way to install it Istio multicluster, or (automatic) sidecar injection?
Regards.
It makes no difference how many master and worker nodes your Kubernetes cluster has when you install Istio.
You can follow the instructions from this link
Briefly, you need to:
1. Download the Istio release.
2. Install Istio's Custom Resource Definitions using kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml from that release.
3. Install the Istio components using one of the following options:
a. Without mutual TLS authentication between sidecars, using kubectl apply -f install/kubernetes/istio-demo.yaml
b. With default mutual TLS authentication, using kubectl apply -f install/kubernetes/istio-demo-auth.yaml
c. Render the Kubernetes manifests with Helm and deploy them with kubectl (see the sketch after this list).
d. Use Helm and Tiller to manage the Istio deployment.
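A rough sketch of option c, assuming the chart paths shipped with that Istio release (the exact path and flags vary by version, so treat them as assumptions):
kubectl create namespace istio-system
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio.yaml
kubectl apply -f istio.yaml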
For automatic injection, you need to install the istio-sidecar-injector component and add the istio-injection=enabled label to each Namespace in which you want it to work.
Example commands:
kubectl label namespace <namespace> istio-injection=enabled
kubectl create -n <namespace> -f <your-app-spec>.yaml
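To verify the label and the injection afterwards (plain kubectl; only the label name is Istio-specific):
kubectl get namespace -L istio-injection      # shows which Namespaces carry the label
kubectl get pods -n <namespace>               # injected pods report 2/2 containers (your app plus istio-proxy)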

Adding a NodePort to an existing Istio service

I created a local Kubernetes cluster with a master and 2 workers using VMs (Ubuntu 16.04).
I am also using Calico for networking, and I am exploring Istio at the moment.
My problem is that the ingress load balancer doesn't get an external IP. To my understanding I should use a NodePort to access the ingress load balancer, but I can't find how to do so.
Should I have done it when installing, or can I add it now, and how?
Kubernetes version: v1.11.1
Calico version: v3.1
Istio version: 0.8.0
If you don't have a Service attached to your deployment, you can use kubectl expose:
kubectl expose deployment istio --type=NodePort --name=istio-service
If you have already deployed a Service, you can edit the Service spec and add type: "NodePort". The quickest way to do this is with kubectl patch:
kubectl patch svc istio-service -p '{"spec":{"type":"NodePort"}}'
More info about NodePort services can be found here
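Since the question is about the Istio ingress, the same patch can be applied to the ingress Service in istio-system; the Service name depends on the Istio version (istio-ingress in 0.8, istio-ingressgateway in later releases), so adjust it to what your cluster shows:
kubectl -n istio-system get svc                                                        # find the ingress Service name
kubectl -n istio-system patch svc istio-ingress -p '{"spec":{"type":"NodePort"}}'      # Service name assumed from Istio 0.8
kubectl -n istio-system get svc istio-ingress -o jsonpath='{.spec.ports[*].nodePort}'  # the assigned node ports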