I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role), so I only have read-only access.
Is there any way to tell my kubeconfig to use the admin role instead of the view role?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for the IAM entity that created an EKS cluster, because by default it is automatically mapped to the "system:masters" K8s group. So, if you want to grant additional permissions in a K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is defined in the aws-auth ConfigMap in the kube-system namespace.
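For illustration, here is a minimal sketch of what a mapRoles entry in that ConfigMap can look like; the account ID, role name, and username below are placeholders, not values from the question:

```yaml
# Hypothetical aws-auth ConfigMap entry; role ARN and username are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-admin-role
      username: eks-admin
      groups:
        - system:masters
```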
Back to the question
I'm not sure why K8s mapped that IAM entity to the least-privileged K8s user; it may be the default behaviour (a bug?), or it may be because the mapping record (for the view permissions) appears later in the ConfigMap and simply overwrote the previous mapping.
Either way, there is no way to specify which of the mapped K8s users the client should use.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per the docs (a sketch follows below), but I'm not sure whether that will work.
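Roughly along these lines; the cluster name and role ARN are placeholders, and you should check the eksctl docs for the exact flags your version supports:

```sh
# Hypothetical example; replace the cluster name and role ARN with your own values.
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::111122223333:role/eks-admin-role \
  --username admin \
  --group system:masters
```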
Some reading reference: #1, #2
Related
We can check the service accounts in a Kubernetes cluster. Likewise, is it possible to check the existing users and groups of my Kubernetes cluster that have cluster-admin privileges? If yes, then how? If no, then why not?
NOTE: I am using EKS
Posting this as a community wiki, feel free to edit and expand.
This won't answer everything, but here are some concepts and ideas.
In short, there's no easy way; it's not possible to do this using Kubernetes itself. The reason is:
All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.
It is assumed that a cluster-independent service manages normal users in the following ways:
an administrator distributing private keys
a user store like Keystone or Google Accounts
a file with a list of usernames and passwords
In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call.
Source
More details and examples from another answer on SO
As for the EKS part that was mentioned, it should be done using AWS IAM in combination with Kubernetes RBAC. Below are articles about setting up IAM roles in a Kubernetes cluster; in the same way, it is possible to find which role has cluster-admin permissions (see the sketch after the links below):
Managing users or IAM roles for your cluster
provide access to other IAM users and roles
If another tool is used for identity management (e.g. LDAP), it should be used instead.
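As a rough starting point for the "who has cluster-admin" question, you can inspect the RBAC bindings and, on EKS, the aws-auth ConfigMap. These are standard kubectl commands, not an exhaustive audit:

```sh
# List all ClusterRoleBindings; wide output includes the bound users/groups/service accounts.
kubectl get clusterrolebindings -o wide

# Show exactly who is bound to the cluster-admin ClusterRole.
kubectl get clusterrolebinding cluster-admin -o yaml

# On EKS, see which IAM roles/users are mapped to Kubernetes users/groups.
kubectl -n kube-system get configmap aws-auth -o yaml
```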
We are working on provisioning our service using Kubernetes, and the service needs to register/unregister some data for scaling purposes. Let's say the service handles long-held transactions: when it starts or scales out, it needs to store the starting and ending transaction IDs somewhere. When it scales out further, it needs to find the next transaction ID and save it along with the ending transaction ID that is covered. When it scales in, it needs to delete the transaction IDs, and so on.
etcd seems to make the cut, as it is used by Kubernetes to store deployment data. Not only is it close to Kubernetes, it is actually inside and maintained by Kubernetes, so we'd like to find out whether it is open for our use. I'd like to ask the question for EKS, AKS, and self-installed clusters. Any advice welcome. Thanks.
Do not use the Kubernetes etcd directly for an application.
Read/write access to the Kubernetes etcd store is effectively root access to every node in your cluster. Even if you are well versed in etcd v3's role-based security model, avoid sharing that specific etcd instance so you don't increase your cluster's attack surface.
For EKS and GKE, the etcd cluster is hidden inside the managed control plane, so you can't break things. I would assume AKS takes a similar approach, unless it exposes the instances that run the management nodes to you.
If the data is small and not heavily updated, you might be able to reuse the Kubernetes etcd store via the Kubernetes API: create a ConfigMap or a custom resource definition for your data and edit it via the easily securable and namespaced functionality of the Kubernetes API.
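A minimal sketch of that idea, assuming a hypothetical ConfigMap named transaction-ranges that the service reads and patches through the API; the names, namespace, and keys are made up for illustration:

```yaml
# Hypothetical ConfigMap holding transaction ID ranges per replica.
apiVersion: v1
kind: ConfigMap
metadata:
  name: transaction-ranges
  namespace: my-service
data:
  replica-0: "start=1000,end=1999"
  replica-1: "start=2000,end=2999"
```

The service can then read and update this with ordinary, RBAC-scoped API calls (e.g. get/patch on the ConfigMap in its own namespace) instead of talking to etcd directly.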
For most application uses, run your own etcd cluster (or whatever service fits) to keep Kubernetes free to do its workload scheduling. The coreos etcd operator will let you define and create new etcd clusters easily.
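For illustration only, a custom resource in the style of the coreos etcd-operator looks roughly like the following; this assumes the operator is already installed, and note that the project has since been archived, so treat this as a sketch of the pattern rather than a recommendation of that specific tool:

```yaml
# Sketch of an etcd-operator custom resource; size and version are illustrative.
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 3
  version: "3.2.13"
```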
Kubernetes RBAC can be used to give permissions to a subject in a particular Namespace. Can the same be accomplished with Cloud IAM?
Not at the moment, no. IAM is used to assign and verify permissions when interacting with GCP APIs. IAM can only provide access to the GKE API, which does not take namespaces into account.
As you mentioned, RBAC is your option for more granular permissions within the cluster (a namespaced example is sketched below).
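For reference, a namespaced RBAC grant is just a Role plus a RoleBinding; the names, namespace, and subject below are placeholders:

```yaml
# Hypothetical Role + RoleBinding granting read access to Pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane@example.com   # on GKE this would typically be a Google account email
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```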
If I got your point correctly:
The IAM roles for a GKE Kubernetes cluster are very coarse: "Admin", "Read/Write", "Read".
But you need more fine-grained control over the Kubernetes cluster.
In this case:
There's a new "Alpha" feature in Google Cloud's IAM which wasn't available previously.
Under IAM > Roles
You can now create custom IAM roles with your own subset of permissions.
You can create a minimal role which allows, for example, gcloud container clusters get-credentials to work but nothing else, so that permissions within the Kubernetes cluster are fully managed by RBAC (sketched below).
This will give you more fine-grained access configuration for the Kubernetes cluster.
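As a rough sketch, a minimal custom role could look like the following; the role ID, project, and permission list are assumptions, so verify the exact permissions required in your project before relying on it:

```sh
# Hypothetical custom role with just enough to fetch cluster credentials;
# everything inside the cluster is then governed by Kubernetes RBAC.
gcloud iam roles create minimalGkeAccess \
  --project my-project \
  --title "Minimal GKE access" \
  --permissions container.clusters.get,container.clusters.list
```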
Our infrastructure currently has two Kubernetes clusters, with one cluster (cluster-1) creating pods in another cluster (cluster-2). Since we are on Kubernetes 1.7.x, we are able to make this work.
However, with 1.8 Kubernetes added support for RBAC, as a result of which we cannot create pods in the new cluster anymore.
We already added support for service accounts and made sure that RoleBindings are properly set up. But the main issue is that the service account is not propagated outside of the cluster (and rightly so). The user under which cluster-2 receives the request is called 'client', so when we added a RoleBinding with 'client' as a User, everything worked.
This is most certainly not the correct solution, as now any cluster that talks to Kubernetes API server can create a pod.
Is there support for RBAC that works cross-cluster? Or is there a way to propagate the service account info through to the cluster we want to create the pods in?
P.S.: Our Kubernetes clusters are currently on GKE, but we would like this to work on any Kubernetes engine.
Your cluster-1 SA uses a kubeconfig (for cluster-2) which resolves to the user "client". The only way to solve this is to generate a kubeconfig (for cluster-2) with an identity (cert/token) associated with your cluster-1 SA. There are lots of ways to do that: https://kubernetes.io/docs/admin/authentication/
The simplest way is to create an identical SA in cluster-2 and use its token in the kubeconfig in cluster-1. Give RBAC permissions only to that SA.
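A rough sketch of that approach, run against cluster-2; the names and namespace are placeholders. On Kubernetes 1.24+ you request a token explicitly, while on older versions you would read it from the SA's auto-created Secret:

```sh
# In cluster-2: create the SA and give it only the RBAC it needs.
kubectl create serviceaccount cluster1-deployer -n workloads
kubectl create rolebinding cluster1-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=workloads:cluster1-deployer \
  -n workloads

# Get a token for that SA (Kubernetes 1.24+; older clusters expose it via a Secret).
TOKEN=$(kubectl create token cluster1-deployer -n workloads)

# Build the kubeconfig entry that cluster-1 will use to talk to cluster-2.
kubectl config set-credentials cluster1-deployer --token="$TOKEN"
kubectl config set-context cluster-2-deploy --cluster=<cluster-2> --user=cluster1-deployer
```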
So the question is in the title.
I'm wondering: what is the purpose of the cluster-name parameter of the Kubernetes controller manager?
Reading the code of the controller manager, you'll find that the cluster name is passed down to the service controller and the persistent volume controller, which then pass it down to their related objects (load balancers, persistent volumes, ...).
In both cases they pass the cluster name down to the related cloud provider (see the interface), which can use it in the naming of its provider-specific objects. This makes sense in case you run more than one Kubernetes cluster side by side on the same provider.
The GCE and AWS cloud providers do this, for example; some others don't.
So having two clusters with the same cluster-name configuration for the controller manager could therefore cause issues due to name collisions within the objects created by the cloud provider.
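For reference, the flag is set on the controller manager itself; a hypothetical invocation might look like this (the cluster name is just an example, and most flags are omitted):

```sh
# The cluster name is handed to the controller manager; cloud providers such as AWS or GCE
# may embed it in the names/tags of the load balancers and volumes they create,
# so it should be unique per cluster.
kube-controller-manager \
  --cluster-name=prod-us-east-1 \
  --cloud-provider=aws
  # ...remaining controller-manager flags omitted
```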
It lets you create more than one cluster and helps you distinguish between them. Most folks just use kubernetes (which is the default). When you set up kubectl, you provide it as well.
This is from the k8s site: https://kubernetes.io/docs/getting-started-guides/scratch/
You should pick a name for your cluster. Pick a short name for each cluster which is unique from future cluster names. This will be used in several ways:
by kubectl to distinguish between various clusters you have access to. You will probably want a second one sometime later, such as for testing new Kubernetes releases, running in a different region of the world, etc.
Kubernetes clusters can create cloud provider resources (e.g. AWS ELBs) and different clusters need to distinguish which resources each created. Call this CLUSTER_NAME
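On the kubectl side, the cluster name is what you reference in your kubeconfig entries; for example (all names and URLs below are placeholders):

```sh
# Hypothetical kubeconfig setup distinguishing two clusters by name.
kubectl config set-cluster prod-us-east-1 --server=https://prod.example.com
kubectl config set-cluster test-eu-west-1 --server=https://test.example.com

# Contexts tie a cluster name to a user; switch between clusters by context.
kubectl config set-context prod --cluster=prod-us-east-1 --user=admin
kubectl config get-contexts
kubectl config use-context prod
```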