K8s RBAC needed when no API calls?

My pod is running with the default service account. It consumes Secrets and ConfigMaps through mounted files, but this is all defined in the YAML; the pod itself does not contain kubectl or any similar component.
Is there any point in using RBAC if I never call the API? The best practices state: "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."

Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, and so on, but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
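A minimal sketch of that setup, with illustrative names throughout, might look like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
subjects:
- kind: ServiceAccount
  name: orchestrator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: job-creator
The orchestrator's Deployment would then set spec.template.spec.serviceAccountName: orchestrator so that its pods authenticate with exactly these permissions.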
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml, then the Secret values are right there to read; they are base64-encoded but not encrypted at all. It's good practice to limit the ability to do this. That said, you can also create a Pod that mounts the Secret and make the main container's command dump the Secret value out somewhere readable, so even then Secrets aren't that secret. Restricting access is still good practice, but not required per se, particularly in an evaluation or test cluster.
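As a sketch of why that is (the Secret name here is hypothetical), any Pod like the following exposes a Secret's plaintext to whoever can read its logs:
apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper
spec:
  restartPolicy: Never
  containers:
  - name: dump
    image: busybox
    # the kubelet mounts Secret values already base64-decoded
    command: ["sh", "-c", "cat /secrets/*"]
    volumeMounts:
    - name: the-secret
      mountPath: /secrets
  volumes:
  - name: the-secret
    secret:
      secretName: my-app-secret
After it runs, kubectl logs secret-dumper prints the decoded values.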

Related

List secrets used by a certain pod in K8s

I'd like to know if kubectl offers an easy way to list all the Secrets that a certain pod/deployment/statefulset is using, or if there is some way to cleanly retrieve this info. When doing a kubectl describe on a pod, I can get a list of mounted volumes, including the ones that come from Secrets, and extract them using jq and the like, but that feels a bit clumsy. I have been searching a bit to no avail. Do you know if there is anything like that around? Perhaps using the API directly?
To list all Secrets currently in use by pods, run:
kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
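Note that this only catches Secrets referenced through environment variables; Secrets mounted as volumes are missed. A similar pipeline (same assumptions, jq installed) covers volume mounts:
kubectl get pods -o json | jq '.items[].spec.volumes[]?.secret.secretName' | grep -v null | sort | uniq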
On the other hand, regarding access to Secrets stored in the API:
Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.
In order to safely use Secrets, take at least the following steps:
Enable Encryption at Rest for Secrets.
Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means).
Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.
If you want more information about Secrets in Kubernetes, see the official documentation.
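As one way to apply the RBAC step from that list, here is a sketch of a namespaced Role that deliberately omits secrets from its readable resources, so subjects bound to it cannot read Secret data directly (all names illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-workloads-no-secrets
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps", "services"]   # note: no "secrets"
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["get", "list", "watch"]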

Deploy from CI/CD via Helm to external Kubernetes cluster with limited rights

I understand that I can copy my .kube/config to my CI/CD server, or just name the ServiceAccount to allow my CD pipeline to use Helm for deployment.
However, what if I want to allow deployment via Helm but restrict a lot of other access, like:
reading data from pods or a deployed database
port-forwarding to services
... so basically any access to data in the cluster, beyond deploying the stateless containers themselves via Helm.
Would it be possible to create a new ClusterRole with limited rights? At a minimum, what verbs does Helm need in a ClusterRole to function properly?
What rights does Helm need at the least?
It comes down to what your Helm chart asks Kubernetes to do.
ClusterRoles can be bound to a particular namespace by referencing them in a RoleBinding. The admin, edit and view default ClusterRoles are commonly used in this manner. For example, edit is a default ClusterRole that allows read/write access to most objects in a namespace but does not allow viewing or modifying Roles or RoleBindings. By contrast, granting a user cluster-admin at the namespace scope (through a RoleBinding) provides full control over every resource in the namespace, including the namespace itself.
You can also restrict a user's access to a particular namespace by using either the edit or the admin role. See this example.
The permissions strategy could also depend on which objects the installation will create. The user (or ServiceAccount) will need access to all of the API objects that the Helm installation will manage. The Using RBAC Authorization documentation explains this concept in more detail, with several examples that you could use as a reference.
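For example, a common pattern is to bind the built-in edit ClusterRole to the pipeline's ServiceAccount in just the target namespace. A sketch, assuming a ServiceAccount named ci-deployer already exists in an apps namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-edit
  namespace: apps
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
Since Helm 3 runs with the caller's credentials (there is no Tiller), this bounds what a chart installation can touch to that one namespace.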

Getting my k8s microservice application's non-namespaced dependencies

If I have a microservice app within a namespace, I can easily get all of my namespaced resources in that namespace using the k8s API. I cannot, however, view which non-namespaced resources are being used by the microservice app. If I want to see my non-namespaced resources, I can only see them all at once, with no indication of which ones are dependencies of the microservice app.
How can I find the dependencies related to my application? I'd like to be able to get references to things like PersistentVolumes, StorageClasses, ClusterRoles, etc. that are being used by the app's namespaced resources.
Your code, running in a pod container inside a namespace, runs under a ServiceAccount set via pod.spec.serviceAccountName.
If that field is not set, it runs under the default ServiceAccount.
You need to create a ClusterRole granting the specific verbs on those cluster-wide resources, then bind it to the ServiceAccount with a ClusterRoleBinding. (A RoleBinding would only grant the ClusterRole's permissions within its own namespace, which cannot cover cluster-scoped resources.)
Then your pod, using a Kubernetes client with the "in-cluster config" auth method, will be able to query the API server to get/list/watch/delete/patch... those cluster-wide resources.
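A sketch of that wiring, with illustrative names, granting read-only access to a few cluster-scoped types for the default ServiceAccount in a myapp namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-resource-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myapp-cluster-resource-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: myapp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-resource-reader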
This is definitely a non-trivial task because of the many ways such a dependency can come into play: whenever an object "uses" another one, we could identify a dependency. The issue is that this "use" relation can take many forms: e.g., a Pod can reference a Volume directly in its definition (a direct dependency), but it can also use a PersistentVolumeClaim, which in turn instantiates a PV through a StorageClass -- and these relations are only known to Kubernetes at run time, when the YAML definitions are applied.
In other words:
To chase dependencies, you would have to inspect the YAML description of the resources in use, knowing the semantics of each: there is no single depends: field; instead, one would need to follow, e.g., the spec.storageClassName of a PVC, the spec.volumes: of a Pod, etc. (a sketch follows this list).
In some cases even that is not enough: e.g., to match Services to Pods, one would also have to match the label selectors and ports on each side.
All of this would have to be done against YAML extracted from a running cluster, since some relations between resources are not known until they are instantiated.
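As a sketch of chasing one such chain by hand (assuming jq is available; the PVC name is illustrative), Pod -> PVC -> StorageClass looks like:
kubectl get pods -o json | jq -r '.items[].spec.volumes[]? | select(.persistentVolumeClaim) | .persistentVolumeClaim.claimName' | sort -u
kubectl get pvc my-claim -o jsonpath='{.spec.storageClassName}'
The first command lists the PVCs that pods in the current namespace reference; the second resolves one PVC to its cluster-scoped StorageClass.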
You could check the article How do you visualise dependencies in your Kubernetes YAML files? by Daniele Polencic, which shows a few tools that can be used to visualize dependencies:
There isn't any static tool that analyses YAML files. But you can visualise your dependencies in the cluster with Weave Scope, KubeView or tracing the traffic with Istio.

Do I need Namespace, Secret, ServiceAccount, and ConfigMap in my ingress.yaml?

I am practicing k8s on Katacoda. Currently I am working on ingress.yaml. This chapter brings extra kinds of resources into the YAML file: Namespace, Secret, ServiceAccount, and ConfigMap.
Secrets are covered in another chapter, so I can read about them later.
Questions:
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
Suppose I use Caddy to serve HTTPS. The Secret in the example is hardcoded. How can I have it renew automatically after a certain period?
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
No, it's not required. They are different Kubernetes resources and can be created independently. But you can place several resource definitions in one YAML file just for convenience.
Alternatively, you can create a separate YAML file for each resource and place them all in the same directory. After that, one of the following commands can be used to create the resources in bulk:
kubectl create -f project/k8s/development
kubectl create -f project/k8s/development --recursive
A Namespace is a logical scope that groups Kubernetes resources. It should be created before any other resources that use it.
A ServiceAccount provides an identity for pods, which RBAC rules can use to restrict the permissions of specific automated operations.
A ConfigMap is a node-independent source of configuration data (files or environment variables) for pods.
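As a minimal illustration of the single-file convenience mentioned above (names are illustrative), resources are simply separated with ---:
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: demo
data:
  LOG_LEVEL: info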
Suppose I use Caddy to serve HTTPS. The Secret in the example is hardcoded. How can I have it renew automatically after a certain period?
The question is not quite clear, but I believe you can use cert-manager for that.
cert-manager is a popular solution for managing certificates in a Kubernetes cluster.
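A sketch, assuming cert-manager is installed in the cluster: an ACME (Let's Encrypt) Issuer plus a Certificate that cert-manager keeps renewed automatically, writing the key pair into the Secret your Ingress references (all names and the email are illustrative):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx    # adjust to your ingress controller
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com-tls    # reference this in the Ingress tls: section
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt
    kind: Issuer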

Kubernetes RBAC: How to allow exec only to a specific Pod created by Deployment

I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets and so on; fairly standard stuff, that is.
I need to grant a special user a Role that can exec only into a certain Pod. Currently RBAC grants the exec right to all pods in the namespace, but I need to tighten it down.
The problem is that the Pod(s) are created by a Deployment named configurator, so the Pod names are generated (configurator-xxxxx-yyyyyy). You cannot use a glob in resourceNames (i.e. configurator-*), and a Role cannot grant exec on a Deployment directly.
So far I've thought about:
Converting the Deployment into a StatefulSet or a plain Pod, so the Pod would have a known, non-generated name and globbing wouldn't be needed
Moving the Deployment into a separate namespace, so the namespace-wide exec right is no longer a problem
Both of these work, but neither is optimal. Is there a way to write a proper Role for this?
RBAC, as currently implemented, does not allow filtering resources by attributes other than namespace and resource name; there is an open upstream discussion about this.
Thus, the namespace is the smallest unit at which access to pods can be authorized. Services must be separated into namespaces according to which users need access to them.
The optimal solution right now is to move this Deployment into another namespace, since it needs different access rules than the other Deployments in the original namespace.
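A sketch of what the Role in that dedicated namespace could look like (namespace and user names illustrative); note that exec needs get on pods plus create on the pods/exec subresource:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: configurator
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: special-user-pod-exec
  namespace: configurator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: special-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exec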