Helm RBAC rules to create namespaces and resources inside those created namespaces - kubernetes

I found a lot of information on how to give helm permission to create resources in a particular namespace.
I am trying to see if I can create namespaces on the fly (with random names) and then use helm to install and delete resources inside that namespace.
My idea is to create a namespace with a name such as Fixedsuffix-randomprefix and then allow helm to create all resources inside it. Is this possible?
I can create a clusterrole and clusterrolebinding to allow tiller's serviceaccount to create namespaces, but I am not able to figure out how to have a serviceaccount that could create resources in that particular namespace (mainly because this serviceaccount would have to be created when the namespace is created and then assigned to the tiller pod).
TIA

My question is: why would you create a sa, clusterrole and rolebinding to do that? Helm has its own options which allow it to install and delete resources inside a new namespace.
My idea is to create a namespace with a name such as Fixedsuffix-randomprefix and then allow helm to create all resources inside it. Is this possible?
Yes, you can create your new namespace and use helm to install everything in this namespace. Or even better, you can just use helm install and it will create the new namespace for you. For that purpose helm has helm install --namespace.
-n, --namespace string namespace scope for this request
For example you can install traefik chart in namespace tla.
helm install stable/traefik --namespace=tla
NAME: oily-beetle
LAST DEPLOYED: Tue Mar 24 07:33:03 2020
NAMESPACE: tla
STATUS: DEPLOYED
Another idea which came to my mind: if you want tiller not to use cluster-admin credentials, then this link could help.
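If you do still want the setup from the question, where tiller's own ServiceAccount is allowed to create namespaces, a minimal sketch could look like the one below (the names and the assumption that tiller runs in kube-system are mine, adjust to your cluster):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-namespace-creator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-creator
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system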

Related

How to install istio mutating webhook and istiod first ahead of other pods in Helm?

I am trying to use Helm 3 to install Kubeflow 1.3 with Istio 1.9 on Kubernetes 1.16. Kubeflow does not provide an official Helm chart, so I figured it out by myself.
But Helm does not guarantee order. Pods of other deployments and statefulsets could be up before the Istio mutating webhook and istiod are up. For example, if pod A comes up earlier without an istio-proxy and pod B comes up later with an istio-proxy, they cannot communicate with each other.
Are there any simple best practices so I can work this out as expected each time I deploy? That is to say, how do I make sure my installation with Helm is atomic?
Thank you in advance.
UPDATE:
I tried three approaches:
marking resources as pre-install, post-install, etc.
using subcharts
decoupling one chart into several charts
I adopted the third. The issue with the first is that helm hooks are designed for Jobs: a resource can be marked as a helm hook, but it would not be deleted when using helm uninstall, since a resource cannot hold two helm hooks at the same time (key conflict in annotations). The issue with the second is that helm installs subcharts and the parent chart at the same time, and it calls the hooks of subcharts and the parent chart at the same time as well.
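For reference, the hook marker from the first approach is just an annotation on the resource, which is why it is a single key in the annotation map; a typical (illustrative) example looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: step
        image: busybox
        command: ["sh", "-c", "echo running pre-install step"]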
Helm does not guarantee order.
Not completely. Helm collects all of the resources in a given Chart and its dependencies, groups them by resource type, and then installs them in the following order:
Namespace
NetworkPolicy
ResourceQuota
LimitRange
PodSecurityPolicy
PodDisruptionBudget
ServiceAccount
Secret
SecretList
ConfigMap
StorageClass
PersistentVolume
PersistentVolumeClaim
CustomResourceDefinition
ClusterRole
ClusterRoleList
ClusterRoleBinding
ClusterRoleBindingList
Role
RoleList
RoleBinding
RoleBindingList
Service
DaemonSet
Pod
ReplicationController
ReplicaSet
Deployment
HorizontalPodAutoscaler
StatefulSet
Job
CronJob
Ingress
APIService
Additionally:
That is to say, make sure my installation with Helm is atomic
you should know that:
Helm does not wait until all of the resources are running before it exits.
You generally have no control over the order if you are using Helm. You can try to use Init Containers to validate your pods and check that their dependencies are available before they run. You can read more about it here. Another workaround would be to add a health check to make sure everything is okay; if not, the pod will restart until it is successful.
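As an illustration of the init-container idea, here is a hedged sketch that keeps a pod from starting its main container until a dependency answers on the network (the service name, namespace and port are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-dependency-check
spec:
  initContainers:
  - name: wait-for-dependency
    image: busybox
    # keep retrying until the dependency service accepts TCP connections
    command: ["sh", "-c", "until nc -z my-dependency.my-namespace.svc 80; do echo waiting; sleep 2; done"]
  containers:
  - name: app
    image: nginx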
See also:
this article about checking your helm deployments.
the question Helm Subchart order of execution in an umbrella chart, which has a good explanation
this question
a related topic on GitHub

Running a Pod from another Pod in the same kubernetes namespace

I am building an application which should execute tasks in separate containers/pods.
This application would be running in a specific namespace, and the new pods must be created in the same namespace as well.
I understand we can do something similar via custom CRDs and Operators, but I found that overly complicated, and we would need Golang knowledge for it.
Is there any way this could be achieved without having to learn Operators and Golang?
I am OK with using kubectl or the API within my container and want to connect to the cluster and work in the same namespace.
Yes, this is certainly possible using a ServiceAccount and then connecting to the API from within the Pod.
First, create a ServiceAccount in your namespace using
kubectl create serviceaccount my-service-account
For your newly created ServiceAccount, give it the permissions you want using Roles and RoleBindings. The subject would be something like this:
subjects:
- kind: ServiceAccount
name: my-service-account
namespace: my-namespace
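To make it concrete, here is a hedged sketch of a Role and RoleBinding that would let that ServiceAccount create and inspect Pods in its own namespace (the name pod-creator is a placeholder):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace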
Then, add the ServiceAccount to the Pod from which you want to create other Pods (see documentation). Credentials are automatically mounted inside the Pod using automountServiceAccountToken.
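For example, the controlling Pod's spec would reference the ServiceAccount roughly like this (the image is just an example of something that ships kubectl):
apiVersion: v1
kind: Pod
metadata:
  name: task-runner
  namespace: my-namespace
spec:
  serviceAccountName: my-service-account
  containers:
  - name: runner
    image: bitnami/kubectl
    command: ["sleep", "infinity"]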
Now from inside the Pod you can either use kubectl or call the API using the credentials inside the Pod. There are libraries for a lot of programming languages to talk to Kubernetes, use those.
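From inside that container, plain kubectl should pick up the mounted ServiceAccount token and namespace automatically (no kubeconfig needed), so launching a task pod could be as simple as:
kubectl run task-1 --image=busybox --restart=Never -- sh -c "echo running my task"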

How do I tell helm to create its internal secrets in a namespace

When trying to run helm install to deploy an application to a private K8S cluster, I get the following error:
helm install myapp ./myapp
Error: create: failed to create: secrets is forbidden: User "u-user1"
cannot create resource "secrets" in API group "" in the namespace "default"
exit status 1
I know that this is happening because helm creates secrets behind the scene to hold information that it needs for managing the deployment. See Handling Secrets:
As of Helm v3, the release definition is stored as a Kubernetes Secret resource by default, as opposed to a ConfigMap.
The problem is that helm is trying to create the secrets in the default namespace, and I'm working in a private cloud and not allowed to create resources in the default namespace.
How can I tell helm to use a namespace when creating the internal secrets that it needs to use?
Searching for a solution
A search on the helm site found:
https://helm.sh/docs/faq/ - which says
In Helm 3, information about a particular release is now stored in the same namespace as the release itself
But I've set the deployment to be in the desired namespace. My myapp/templates/deployment.yaml file has:
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
namespace: myapp-namespace
So I'm not sure how to tell helm to create its internal secrets in this myapp-namespace.
Other Searches
Helm Charts create secrets in different namespace - Is asking a different question about how to create user defined secrets in different namespaces.
Helm upgrade is creating multiple secrets - Different question, and no answer (yet).
Secret management in Helm Charts - is asking a different question.
Update 1)
When searching for a solution I tried adding the --namespace myapp-namespace argument to the helm install command (see below).
helm install --namespace myapp-namespace myapp ./myapp
Error: create: failed to create: secrets is forbidden: User "u-user1"
cannot create resource "secrets" in API group "" in the namespace "myapp-namespace"
exit status 1
Notice that the namespace is now myapp-namespace, so I believe that helm is now creating the internal secrets in my desired namespace, so I think this answers my original question.
I think I now have a permissions issue that I need to ask the K8S admins to address.
You must use the --namespace option in order to tell helm install what namespace you are using. The syntax you specified is correct.
helm install --namespace myapp-namespace myapp ./myapp
You could also put --namespace at the end of the command as below:
helm install myapp ./myapp --namespace myapp-namespace
With this syntax, helm will create the internal secrets in the namespace you've specified.
Doing this will prevent the default namespace from being polluted.
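If you want to verify where Helm put its release record, Helm 3 labels those Secrets with owner=helm, so something like this should show it:
kubectl get secrets --namespace myapp-namespace -l owner=helm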
The following commands can then be used to see the install:
helm list --namespace myapp-namespace
helm list --all-namespaces
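As for the forbidden error in Update 1, that part is indeed up to the K8S admins; the grant they apply would be in the spirit of the sketch below (the role name and verb list are assumptions, and Helm typically needs permissions on more than just secrets to install a chart):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-release-manager
  namespace: myapp-namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "list", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-release-manager-binding
  namespace: myapp-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: helm-release-manager
subjects:
- kind: User
  name: u-user1
  apiGroup: rbac.authorization.k8s.io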

Unable to scrape other namespaces when using kube-prometheus

Prometheus deployed using kube-prometheus can't scrape resources in namespaces other than default, monitoring and kube-system. I added additional namespaces in my jsonnet as described in the kube-prometheus README, but no success...
I also tried to create a new ServiceMonitor manually, but no success...
I appreciate any help.
Thanks.
If you used the pre-compiled manifests here, you will only have your service account with 3 RoleBindings allowing access to the namespaces you mentioned.
You can add more namespaces, for example by applying the same RoleBinding in additional namespaces.
This is more secure than using a ClusterRoleBinding, since it allows for more fine-grained permissions.
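A hedged sketch of what such a per-namespace grant can look like, assuming the default prometheus-k8s ServiceAccount in the monitoring namespace and a target namespace called my-app (adjust names to your setup):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: my-app
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring
On top of the RBAC you still need a ServiceMonitor that selects the workloads in that namespace.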

Kubernetes RBAC role for tiller

We have multiple development teams who work and deploy their applications on kubernetes. We use helm to deploy our applications on kubernetes.
Currently the challenge we are facing is with one of our shared clusters. We would like to deploy a separate tiller for each team, so they only have access to their own resources. The default cluster-admin role will not help us, and we don't want that.
Let's say we have multiple namespaces for one team. I would want to deploy a tiller which has permission to work with resources that exist, or need to be created, in these namespaces.
Team > multiple namespaces
tiller using a service account that has a role (with full access to those namespaces, not all) associated with it.
I would want to deploy a tiller which has permission to work with resources that exist, or need to be created, in these namespaces
According to the fine manual, you'll need a ClusterRole per team, defining the kinds of operations allowed on the kinds of resources, but then use a RoleBinding to scope those rules to a specific namespace. The two ends of the binding will be the team's tiller ServiceAccount and the team's ClusterRole, with one RoleBinding instance per Namespace (even though they will be textually identical except for the namespace: portion); a sketch of one such pair follows.
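A hedged sketch of that pattern for one team and one of its namespaces (names like team-alpha-tiller and ns-alpha are placeholders, and the tiller ServiceAccount is assumed to live in the team's own namespace; repeat the RoleBinding once per namespace):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-alpha-tiller
rules:
- apiGroups: ["", "apps", "batch", "extensions"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-tiller
  namespace: ns-alpha
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: team-alpha-tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: team-alpha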
I would actually expect you could make an internal helm chart that automates the specifics of that relationship, then run helm install --name team-alpha --set team-namespaces=ns-alpha,ns-beta my-awesome-chart, and grant the tiller installing that chart cluster-admin or whatever more restrictive ClusterRole you wish.