How to find the associated service account for Helm? - kubernetes

Prior to Helm 3, it was possible to associate a service account in helm initialization via
helm init --service-account tiller
But since helm init is now deprecated, how can we find out which service account Helm is associated with?

Helm 3 has the same permissions as the default config in ~/.kube/config, or another config if specified via the $KUBECONFIG environment variable, or overridden using the following command options:
--kube-context string name of the kubeconfig context to use
--kubeconfig string path to the kubeconfig file
With Tiller gone, the security model for Helm is radically simplified. Helm 3 now supports all the modern security, identity, and authorization features of modern Kubernetes. Helm’s permissions are evaluated using your kubeconfig file. Cluster administrators can restrict user permissions at whatever granularity they see fit. — Changes since Helm 2: Removal of Tiller
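Since Helm 3 simply acts as whatever identity your kubeconfig resolves to, one way to answer the question is to inspect that identity directly. A minimal sketch (the config path and context name below are hypothetical):

```shell
# Show which cluster and user the current context points at --
# this is the identity Helm 3 will act as:
kubectl config view --minify

# Check what that identity is allowed to do:
kubectl auth can-i create deployments

# Point Helm at a different kubeconfig or context explicitly:
helm list --kubeconfig ~/.kube/prod-config
helm list --kube-context staging
```

Note there is no Helm-specific service account anymore; the permissions are exactly those of the kubeconfig user (or ServiceAccount token) you run the command with.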

Related

For how long should I keep the storage driver Secret in my cluster?

I'm using helm 3.4.2 to upgrade my charts on my AKS cluster, and I noticed that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using helm.
I was reading the docs and found that in version 3.x helm uses secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure it's best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 just in case. If that's OK to do, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
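The namespace scoping described above can be seen directly from the helm list flags (the "monitoring" namespace is a hypothetical example):

```shell
helm list                    # releases in the current context's namespace only
helm list -n monitoring      # releases in one specific namespace
helm list --all-namespaces   # Helm 2-like behaviour: everything in the cluster
```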
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets created by helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
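To prune in bulk rather than one secret at a time, a sketch like the following can keep only the newest revisions of a release. It assumes the Helm 3 secret naming scheme sh.helm.release.v1.&lt;release&gt;.v&lt;revision&gt;, a release name without dots, and GNU head (for the negative -n); the release name "myapp" and namespace "default" are hypothetical:

```shell
RELEASE=myapp
NS=default
KEEP=2   # how many newest revisions to retain

# Sort by the revision field (6th dot-separated field, version-aware),
# drop the newest $KEEP entries, delete the rest.
kubectl get secret -n "$NS" -l "owner=helm,name=$RELEASE" -o name \
  | sort -t. -k6 -V \
  | head -n -"$KEEP" \
  | xargs -r kubectl delete -n "$NS"
```

The sort/head pipeline is the interesting part: version sort (-V) keeps v10 after v2, and head -n -"$KEEP" discards everything except the oldest entries, which are then deleted.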
Btw (just FYI), taking into account that Helm 3 is scoped to namespaces, you can also remove a deployment entirely by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (default 10).
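For instance, capping stored history at upgrade time avoids the need for manual cleanup altogether (the release name "myapp" and chart path are hypothetical):

```shell
# Helm keeps at most 3 release secrets for this release,
# pruning the oldest automatically on each upgrade:
helm upgrade myapp ./mychart --history-max 3
```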

Dynamically refresh pods on secrets update on kubernetes while using helm chart

I am creating deployment and service manifest files using Helm charts, and also secrets via Helm, but separately, not together with the deployments and services.
The secrets are loaded as environment variables at the pod level.
We are looking to refresh or restart pods when we update secrets with new content.
Kubernetes does not itself support this feature at the moment; there is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can use a custom solution to achieve this; one of the popular ones is Reloader.
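As a sketch of how Reloader is typically wired up (assuming it is already installed in the cluster; the deployment and secret names are hypothetical), it is driven by annotations on the workload:

```shell
# Roll the Deployment whenever one named Secret changes:
kubectl annotate deployment myapp secret.reloader.stakater.com/reload=my-secret

# Or let Reloader watch every ConfigMap/Secret the Deployment references:
kubectl annotate deployment myapp reloader.stakater.com/auto=true
```

Since the manifests come from a Helm chart, the same annotations can instead be added under the Deployment's metadata.annotations in the chart templates.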

Specifying K8s namespace for Gitlab runner

I have a Gitlab runner using a K8s executor. But when running the pipeline I am getting below error
Checking for jobs... received job=552009999
repo_url=https://gitlab.com/deadbug/rns.git runner=ZuT1t3BJ
WARNING: Namespace is empty, therefore assuming 'default'. job=552009999 project=18763260
runner=ThT1t3BJ
ERROR: Job failed (system failure): secrets is forbidden: User "deadbug" cannot create resource
"secrets" in API group "" in the namespace "default" duration=548.0062ms job=552009999
From the error message, I understand the namespace needs to be updated. I specified the namespace in the Gitlab variables.
But after this, the pipeline still fails with the above error message. How do I change the namespace for the runner?
This seems to be linked to the permissions of the service account rather than the namespace directly. If you use GitLab's Kubernetes integration, you should not override the namespace, as GitLab will create one for you.
Make sure the service account you added to GitLab has the correct role. From https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html:
When GitLab creates the cluster, a gitlab service account with cluster-admin privileges is created in the default namespace to manage the newly created cluster
You may be having the same issue I was having. Instead of installing the Gitlab Runner into the existing Kubernetes cluster with helm install, I used helm template and another manager to install it (kapp). This breaks the logic in the Helm template that specifies the namespace as the one used in the helm install (See code). This led the runner to attempt to create the pods in the default namespace, instead of the namespace I created. I was able to specify it manually in my values.yml file though:
runners:
  namespace: my-namespace
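Under the same assumption (rendering the chart with helm template rather than installing with helm install; "gitlab-runner" and "my-namespace" are hypothetical names), the values file or an equivalent --set flag carries the namespace into the rendered manifests:

```shell
# Render with the values file containing runners.namespace:
helm template gitlab-runner gitlab/gitlab-runner -f values.yml

# Or set it inline without a values file:
helm template gitlab-runner gitlab/gitlab-runner --set runners.namespace=my-namespace
```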

How does Helm keep track of which Kubernetes cluster it installs to?

If I am using kubectx and switch kube config contexts into another cluster e.g. "Production" and run a helm uninstall, how does Helm know which cluster I am referring to?
If I run the helm list command is it only referring to what's installed on my local machine and not per Kubernetes cluster?
Helm will default to using whatever your current Kubernetes context is, as specified in the $HOME/.kube/config file.
There is standard support in the Kubernetes API libraries to read data out of this file (or an alternative specified by a $KUBECONFIG environment variable). If you're writing Go, see the documentation for the k8s.io/client-go/tools/clientcmd package. While kubectx does a bunch of things, its core uses that API to do essentially the same thing as running kubectl config use-context ....
If you want Helm to use a non-default context, there is a global option to specify it:
kubectx production
helm list                            # uses the current ("production") context
kubectx development
helm --kube-context production list  # still targets "production", regardless of the current context

RBAC for kubectl and helm

Can I create RBAC for kubectl and helm commands? The requirement is that a particular set of users can only run kubectl describe commands, while others can run apply/create/delete commands.
Yes, you define Role/ClusterRole objects and bind them to individual users and ServiceAccounts with RoleBinding/ClusterRoleBinding objects. It is described at length in the Kubernetes documentation.
When considering helm, use v3. Helm version 3 is a little friendlier than version 2 in this space: version 2 uses an in-cluster management component (Tiller) which needs special RBAC setup, whereas version 3 just runs with the credentials and permissions of the user running the command.
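A minimal sketch of the split described above, using kubectl's imperative RBAC commands (the role names, user names, and the "dev" namespace are hypothetical; kubectl describe needs at least get and list on the resources involved):

```shell
# Read-only role for the "describe" group of users:
kubectl create role reader --verb=get,list,watch \
  --resource=pods,deployments,services -n dev
kubectl create rolebinding alice-reader --role=reader --user=alice -n dev

# Broader role for users who may apply/create/delete:
kubectl create role editor --verb=get,list,watch,create,update,patch,delete \
  --resource=pods,deployments,services -n dev
kubectl create rolebinding bob-editor --role=editor --user=bob -n dev
```

Because Helm 3 acts with the caller's own credentials, these same bindings govern what each user's helm install/upgrade/uninstall can do; there is no separate Helm-side RBAC to configure.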