Installing Telepresence with a PodSecurityPolicy - Kubernetes

I'm trying to install Telepresence into an EKS cluster that has PodSecurityPolicies. I've gotten the traffic manager installed by running Helm on the traffic-manager chart:
helm install traffic-manager -n ambassador datawire/telepresence --create-namespace
After that, I modify the traffic-manager-ambassador ClusterRole to use one of the cluster's PodSecurityPolicies. Installation of the traffic manager eventually succeeds after I do this. However, the installation of the uninstall-agent job fails:
Error creating: pods "uninstall-agents-" is forbidden: PodSecurityPolicy: unable to admit pod: []
My question is: which Role or ClusterRole do I have to modify to allow Helm to uninstall Telepresence? Or how do I figure out which service account is being used to create the pod, so I can give it access to a PodSecurityPolicy?

I made some fixes at https://github.com/ddl-pjohnson/telepresence/pull/1/files to make it easier to add additional rules and to run the Helm hook as the correct user.
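More generally, granting a service account the right to use a PodSecurityPolicy comes down to an RBAC rule on the policy's use verb. A minimal sketch, assuming a PSP named my-psp and a hook service account named uninstall-agent in the ambassador namespace (both names are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-use
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["my-psp"]   # placeholder PSP name
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-use-uninstall-agent
  namespace: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-use
subjects:
- kind: ServiceAccount
  name: uninstall-agent   # placeholder: whatever service account the hook job runs as
  namespace: ambassador
To find the service account the failing pod uses, kubectl describe job uninstall-agents -n ambassador should show the pod template's service account.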

Related

cannot helm install rabbitmq servers (helm 2.16.9): namespaces "rabbit" is forbidden

helm install --name my-rabbitserver stable/rabbitmq --namespace rabbit
Error: release my-rabbitserver failed: namespaces "rabbit" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "rabbit"
I have tried with (and without) a rabbit namespace created before the install attempt.
I am using Helm 2.16.9, so I need to qualify the name of my installation with --name.
I am running this against a Google Cloud Kubernetes cluster.
It looks as though the Helm Tiller pod did not have sufficient privileges.
I found this similar issue:
https://support.sumologic.com/hc/en-us/articles/360037704393-Kubernetes-Helm-install-fails-with-Error-namespaces-sumologic-is-forbidden-User-system-serviceaccount-kube-system-default-cannot-get-resource-namespaces-in-API-group-in-the-namespace-sumologic-
Basically I had to delete the Tiller deployment, set up a Tiller ServiceAccount via YAML, apply it to give Tiller access to kube-system, and then run helm init again with the new service account.
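Roughly, the fix from that article boils down to the following commands, assuming Helm 2's Tiller lives in kube-system (the binding name is arbitrary):
kubectl -n kube-system delete deployment tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade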
The Helm RabbitMQ installs then appear to work as advertised.
I thought Helm was supposed to make life easier, but it still has its own limitations and needs additional YAML files to work as advertised.

Firewall/Port requirements for Helm 2.16

We are installing Helm v2.16 on Kubernetes v1.14 in offline mode. We downloaded the Tiller Docker image and loaded it on the server where we were installing Helm.
i. No access to the Internet from the application servers
ii. Limited port connectivity between the Kubernetes master and worker nodes (no * connectivity between the servers). The ports that are open between the application servers are:
a. 10250-10260
b. 6443
c. 443
d. 2379-2380
e. NodePort range 30000-32767
f. 44134-44135
We downloaded Helm 2.16 and installed it following the steps below. The Tiller pod failed to come up until we allowed ALL communication between the Kubernetes master and worker nodes. This suggests there are specific firewall requirements for Helm/Tiller to function in a Kubernetes cluster. Could someone please share the port/firewall details, since we do not want to open ALL traffic even between the nodes of a cluster (we would rather open specific ports)?
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --skip-refresh

Helm3 with EKS cluster

During installation of Helm 3 (stable), I found that Helm 3 does not deploy Tiller for fetching cluster details; it works as a client utility only. My question is: if it does not implement the Tiller concept for fetching details, how does it connect to EKS?
I have already installed kubectl and it is running fine. Is the Helm client dependent on kubectl?
I performed the following steps:
1. helm version
version.BuildInfo{Version:"v3.1.0", GitCommit:"b29d20baf09943e134c2fa5e1e1cab3bf93315fa", GitTreeState:"clean", GoVersion:"go1.13.7"}
2. kubectl create serviceaccount tiller --namespace kube-system
serviceaccount/tiller created
3. notepad rbac-config.yaml
4. kubectl apply -f rbac-config.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-role-binding created
5. helm init --service-account tiller
Error: unknown flag: --service-account
I know steps 2, 3, and 4 are not required in Helm 3, but I am curious how Helm 3 interacts as a client with the EKS cluster.
Just like kubectl, Helm uses kubeconfig to communicate with the cluster.
So both kubectl and Helm depend on the cluster's config file rather than on each other.
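For example, with EKS the usual flow is to write the cluster's credentials into your kubeconfig and let Helm pick them up from there. A minimal sketch, where the cluster name and region are placeholders:
aws eks update-kubeconfig --region us-east-1 --name my-cluster
helm list --kubeconfig ~/.kube/config
Helm 3 also honours the KUBECONFIG environment variable and defaults to ~/.kube/config, just like kubectl.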

How to use DigitalOcean's Kubernetes and set auto-scale?

I'm working with Kubernetes. I tried DigitalOcean's Kubernetes offering, which is very easy to set up and access, but how can I install metrics-server in it? How can I autoscale in Kubernetes on DO?
Please reply as soon as possible.
The Metrics Server can be installed into your cluster with Helm:
https://github.com/helm/charts/tree/master/stable/metrics-server
helm init
helm upgrade --install metrics-server --namespace=kube-system stable/metrics-server
With RBAC enabled, see the more comprehensive instructions for installing Helm into your cluster:
https://github.com/helm/helm/blob/master/docs/rbac.md
If you wish to deploy without Helm, the manifests are available from the GitHub repository:
https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B
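Once metrics-server is reporting, you can autoscale a workload with a HorizontalPodAutoscaler. A minimal sketch, assuming a deployment named my-app (a placeholder):
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=5
Note that this scales pods within the cluster; scaling the DigitalOcean worker nodes themselves is a separate concern.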

Kubernetes helm - Running helm install in a running pod

I want to spin up a single installer pod with helm install that, once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of Helm dependencies, but I want to run some business logic with the installations, and I'd rather do it in the installer pod than on the host triggering the whole installation process.
I found suggestions on using the Kubernetes REST API from inside a pod, but Helm requires kubectl to be installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's token that's already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed Helm, which was already able to use the same credentials to access the cluster and install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
In-cluster credentials can be provided via the service account's "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/