RBAC for kubectl and helm - kubernetes-helm

Can I create RBAC for kubectl and helm commands? The requirement is that a particular set of users can only run kubectl describe commands, while others can run apply/create/delete commands.

Yes, you define Role/ClusterRole objects and bind them to individual users and ServiceAccounts with RoleBinding/ClusterRoleBinding objects. This is described at length in the Kubernetes documentation.
As for Helm, use v3. Helm 3 is a little friendlier than version 2 in this space: version 2 used an in-cluster management component (Tiller) that needed special RBAC setup, whereas version 3 simply runs with the credentials and permissions of the user running the command.
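As a minimal sketch, assuming placeholder names, a namespace, and a user called alice (none of which come from the question), a read-only Role covering kubectl describe could look like this, with a RoleBinding attaching it to a user:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only            # placeholder name
  namespace: default         # placeholder namespace
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "services"]
  verbs: ["get", "list", "watch"]   # enough for kubectl describe
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
  namespace: default
subjects:
- kind: User
  name: alice                # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
A second Role with verbs such as create, update, patch and delete, bound to the other group of users, covers the apply/create/delete side; with Helm 3, helm commands then run under whatever permissions each user's kubeconfig grants.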

Related

How can I tell if server-side apply is enabled in my Kubernetes cluster?

The page on server-side apply in the Kubernetes docs suggests that it can be enabled or disabled (e.g., the docs say, "If you have Server Side Apply enabled ...").
I have a GKE cluster and I would like to check if server-side apply is enabled. How can I do this?
You can create any object, such as a namespace, and check its YAML output with the commands below; that will tell you whether SSA is enabled or not.
Command:
kubectl create ns test-ssa
Get the created namespace
kubectl get ns test-ssa -o yaml
If managedFields is present in the output, SSA is working.
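For illustration, the relevant part of the output might look roughly like this (the manager name, timestamp and field details are placeholders and will differ in real output):
apiVersion: v1
kind: Namespace
metadata:
  name: test-ssa
  managedFields:
  - manager: kubectl-create          # the client that owns these fields
    operation: Update
    apiVersion: v1
    time: "2022-01-01T00:00:00Z"     # placeholder timestamp
    fieldsType: FieldsV1
    fieldsV1: {}                     # real output lists the owned fields here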
Server-Side Apply was, I think, introduced around Kubernetes version 1.14 and is now GA as of version 1.22. With GKE I have noticed it has already been available as alpha or beta.
If you are using Helm on your GKE cluster, you might already have noticed Server-Side Apply.

How to find the associated service account for Helm?

Prior to Helm 3, it was possible to associate a service account in helm initialization via
helm init --service-account tiller
But since helm init is now deprecated, how can we find out which service account Helm is associated with?
Helm 3 has the same permissions as the default config in ~/.kube/config, or another config if specified via the $KUBECONFIG environment variable, or as overridden with the following command options:
--kube-context string name of the kubeconfig context to use
--kubeconfig string path to the kubeconfig file
With Tiller gone, the security model for Helm is radically simplified. Helm 3 now supports all the modern security, identity, and authorization features of modern Kubernetes. Helm’s permissions are evaluated using your kubeconfig file. Cluster administrators can restrict user permissions at whatever granularity they see fit. — Changes since Helm 2: Removal of Tiller
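So, to see which identity and permissions Helm 3 will act with, inspect the kubeconfig it picks up; a quick sketch (these are plain kubectl/kubeconfig commands, not Helm-specific ones):
kubectl config current-context
kubectl config view --minify -o jsonpath='{.contexts[0].context.user}'   # user of the current context
kubectl auth can-i --list   # what that identity is allowed to do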

Find usage of a ServiceAccount in a Kubernetes cluster

I am running a Kubernetes cluster on bare metal with three nodes.
I have applied a couple of YAML files for different services.
Now I would like to bring some order to the cluster and clean up some orphaned objects.
To do that I need to understand the set of pods or other entities which use or refer to a certain ServiceAccount.
For example, I can dig into the ClusterRoleBinding of, say, admin-user and investigate it:
kubectl get clusterrolebinding admin-user
But is there a good combination of kubectl options to find all the usages/references of a given ServiceAccount?
You can list all resources using a service account with the following command:
kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="YOUR_SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}];{end}' | tr ";" "\n"
You just need to replace YOUR_SERVICE_ACCOUNT_NAME with the one you are investigating. Note that this filter only inspects the first subject of each binding.
I tested this command on my cluster and it works.
Take a look at the rbac-lookup project. After installing it via Homebrew or krew, you can use it to find a service account and look at its role, scope, and source. It does not tell you which pods refer to the service account, but it is still a useful tool.
rbac-lookup serviceaccountname --output wide --kind serviceaccount
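If you also want the pods that actually reference a given ServiceAccount (which neither of the above shows), a similar jsonpath filter over pods is one option; a sketch, with the account name as a placeholder:
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName=="YOUR_SERVICE_ACCOUNT_NAME")]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'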

Tool to check YAML files for Kubernetes offline

Is there some tool available that could tell me whether a K8s YAML configuration (to-be-supplied to kubectl apply) is valid for the target Kubernetes version without requiring a connection to a Kubernetes cluster?
One concrete use-case here would be to detect incompatibilities before actual deployment to a cluster, for example because some already-deprecated API version has finally been dropped in a newer Kubernetes version, as happened for Helm with the switch to Kubernetes 1.16 (see Helm init fails on Kubernetes 1.16.0):
Dropped:
apiVersion: extensions/v1beta1
New:
apiVersion: apps/v1
I want to check for these kinds of incompatibilities within a CI system, so that I can reject the configuration before even attempting to deploy it.
Just run the command below to validate the syntax:
kubectl create -f <yaml-file> --dry-run=client
In fact, the dry-run option validates the YAML syntax and the object schema. You can capture the output in a variable and, if there is no error, rerun the command without the dry run, as sketched below.
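A minimal shell sketch of that flow for CI (the manifest path is a placeholder, and a reachable cluster is still required for the dry run):
if out=$(kubectl create -f manifest.yaml --dry-run=client 2>&1); then
  kubectl create -f manifest.yaml   # dry run passed, create for real
else
  echo "validation failed: $out" >&2
  exit 1
fi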
You could use kubeval
https://kubeval.instrumenta.dev/
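For example, to validate a manifest against a specific Kubernetes version without any cluster connection (the file name is a placeholder), something like:
kubeval --kubernetes-version 1.16.0 --strict deployment.yaml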
I don't think kubectl supports client-side-only validation yet (as of 02/2022).

kubernetes create cluster with logging and monitoring for ubuntu

I'm setting up a Kubernetes cluster on DigitalOcean Ubuntu machines. I got the cluster up and running following the Ubuntu getting-started guide. During the setup, the ENABLE_NODE_LOGGING, ENABLE_CLUSTER_LOGGING and ENABLE_CLUSTER_DNS variables are set to true in config-default.sh.
However, no controller or services are created for Elasticsearch/Kibana. I did have to run deployAddon.sh manually for SkyDNS; do I need to do the same for logging and monitoring, or am I missing something in the default configuration?
By default, the logging and monitoring services are not in the default namespace.
You should be able to see if the services are running with kubectl cluster-info.
To look at the individual services/controllers, specify the kube-system namespace:
kubectl get service --namespace=kube-system
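For example, to check specifically for the Elasticsearch/Kibana pieces, you can filter the kube-system resources (a sketch; the exact resource names can vary by setup):
kubectl get svc,rc,pods --namespace=kube-system | grep -Ei 'elasticsearch|kibana'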
By default, logging and monitoring are not enabled if you are installing Kubernetes on Ubuntu machines. It looks like the config-default.sh script was copied from some other folder, so the variables ENABLE_NODE_LOGGING and ENABLE_CLUSTER_LOGGING are present but are not used to bring up the relevant logging deployments and services.
As Jon Mumm said, kubectl cluster-info gives you the info. But if you want to install the logging service, go to
kubernetes/cluster/addons/fluentd-elasticsearch
and run
kubectl create -f es-controller.yaml -f es-service.yaml -f kibana-controller.yaml -f kibana-service.yaml
with the right setup. Change the YAML files to suit your configuration and make sure kubectl is in your path.
Update 1: This will bring up the Kibana and Elasticsearch services.