How to add or introduce a Kubernetes normal user?

I saw it in the official docs, but I don't know how to add or introduce a normal user from outside the Kubernetes cluster. I searched a lot about normal users in Kubernetes but found nothing useful.
I know it's different from a ServiceAccount and that we cannot add a normal user through the Kubernetes API.
Any idea how to add or introduce a normal user to a Kubernetes cluster, and what a normal user is for?

See "Comparing Kubernetes Authentication Methods" by Etienne Dilocker
A possible solution is x509 client certificates:
Advantages
operating the Kubernetes cluster and issuing user certificates is decoupled
much more secure than basic authentication
Disadvantages
x509 certificates tend to have a very long lifetime (months or years). So, revoking user access is nearly impossible. If we instead choose to issue short-lived certificates, the user experience drops, because replacing certificates involves some effort.
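If you do go down the x509 route, here is a minimal sketch of issuing a client certificate for a normal user through the Kubernetes CSR API (the user jane, group dev-team, and file names are placeholders, and the certificates.k8s.io/v1 API requires a reasonably recent cluster):

# generate a private key and a CSR for the hypothetical user "jane"
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"

# submit the CSR to the cluster and approve it
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: $(base64 < jane.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve jane

# retrieve the signed certificate and add the credentials to a kubeconfig
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 -d > jane.crt
kubectl config set-credentials jane --client-key=jane.key --client-certificate=jane.crt --embed-certs=true

The CN of the certificate becomes the username and the O field becomes the user's group, which is what RBAC bindings can then refer to.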
But Etienne recommends OpenID:
Wouldn’t it be great if we could have short-lived certificates or tokens, that are issued by a third-party, so there is no coupling to the operators of the K8s cluster.
And at the same time all of this should be integrated with existing enterprise infrastructure, such as LDAP or Active Directory.
This is where OpenID Connect (OIDC) comes in.
For my example, I’ve used Keycloak as a token issuer. Keycloak is both a token issuer and an identity provider out-of-the box and quite easy to spin up using Docker.
Using RBAC with that kind of authentication is not straightforward, but it is possible.
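As a rough sketch (the issuer URL, client ID, and token values are placeholders, and the exact flags vary by Kubernetes version), the API server is pointed at the OIDC issuer and kubectl is configured with the tokens it issues:

# kube-apiserver flags pointing at a Keycloak realm (placeholders)
kube-apiserver \
  --oidc-issuer-url=https://keycloak.example.com/auth/realms/my-realm \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups

# kubectl user entry using the tokens issued by Keycloak
kubectl config set-credentials jane \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://keycloak.example.com/auth/realms/my-realm \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=refresh-token=<refresh-token> \
  --auth-provider-arg=id-token=<id-token>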
See "issue 118; Security, auth and logging in"
With 1.3 I have SSO into the dashboard working great with a reverse proxy and OIDC/OAuth2. I wouldn't create an explicit login screen, piggy back off of the RBAC model and the Auth model that is already supported. It would be great to have something that says who the logged in user is though.
Note that since 1.3, there might be a simpler solution.
The same thread includes:
I have a prototype image working that will do what I think you're looking for: https://hub.docker.com/r/mlbiam/openunison-k8s-dashboard/
I removed all the requirements for user provisioning and stripped it down to just:
reverse proxy
integration with openid connect
display the user's access token
simple links page
It includes the role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
  nonResourceURLs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: Group
  name: admin
- kind: ServiceAccount
  name: default
  namespace: kube-system
- kind: ServiceAccount
  name: openunison
  namespace: default
roleRef:
  kind: ClusterRole
  name: admin-role
Again, this was specific to the dashboard RBAC access, and has since been improved with PR 2206 Add log in mechanism (to dashboard).
It can still give you some clues on how to link a regular user to Kubernetes RBAC.

Related

Role definition for Kubernetes user to work on single namespace

I am currently facing the following situation: I want to give users access to individual namespaces, such that they can
create and deploy resources with Helm charts (for instance, from Bitnami)
On the other hand, the users are not supposed to
create/retrieve/modify/delete RBAC settings like ServiceAccounts, RoleBindings, Roles, NetworkPolicies
get hands on secrets associated to ServiceAccounts
Of course, the crucial thing is to define the best Role for this. Likely, the following is not the best idea:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role
  namespace: example-namespace
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
Hence, it would be great if you could suggest a sensible approach so that the users can work as freely as possible, yet do not get their hands on the more "dangerous" resources.
In essence, I want to follow the workflow outlined here (https://jeremievallee.com/2018/05/28/kubernetes-rbac-namespace-user.html). What matters most is that individual users in one namespace cannot read the secrets of other users in the same namespace, so they cannot authenticate with someone else's credentials.
In my opinion the following strategy will help:
Use RBAC to limit access to service accounts of the user's own namespace only.
Make sure automountServiceAccountToken: false is set at the ServiceAccount and Pod level, enforced using policies. This helps protect secrets when there is a node security breach: the token is only made available at execution time and is not stored in the Pod.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server
Encrypt secrets stored in etcd using KMS (recommended). But if you don't have a KMS provider, you can also choose other providers to ensure minimum security (a minimal example follows below).
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#providers
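For illustration, a minimal EncryptionConfiguration for secrets might look like the following sketch (aescbc is used here as a stand-in provider; a kms provider block would replace it, and the key value is a placeholder). The file is passed to the API server via --encryption-provider-config:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder key
      - identity: {}                                  # fallback: read unencrypted data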
Sounds like the built-in ClusterRole edit would almost fit your needs.
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
It will allow access to Secrets, though: "However, this role allows accessing Secrets and running Pods as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace."
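If that caveat is acceptable, a minimal sketch of granting edit inside a single namespace is a RoleBinding that references the ClusterRole (the user name jane and the namespace are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: edit-example-namespace
  namespace: example-namespace
subjects:
- kind: User
  name: jane                        # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole                 # referencing a ClusterRole from a RoleBinding limits it to this namespace
  name: edit
  apiGroup: rbac.authorization.k8s.io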

How can I access Microk8s in Read only mode?

I would like to read the state of K8s using µK8s, but I don't want to have rights to modify anything. How can I achieve this?
The following will give me full access:
microk8s.kubectl
Insufficient permissions to access MicroK8s. You can either try again with sudo or add the user digital to the 'microk8s' group:
sudo usermod -a -G microk8s digital
sudo chown -f -R digital ~/.kube
The new group will be available on the user's next login.
on Unix/Linux we can just set appropriate file/directory access permissions - just rx, decrease shell limits (like max memory/open file descriptors), decrease process priority (nice -19). We are looking for a similar solution for K8s
This kind of problem in Kubernetes is handled via RBAC (role-based access control). RBAC prevents unauthorized users from viewing or modifying the cluster state. Because the API server exposes a REST interface, users perform actions by sending HTTP requests to the server. Users authenticate themselves by including credentials in the request (an authentication token, username and password, or a client certificate).
As with REST clients, you use GET, POST, PUT, DELETE, etc. These are sent to specific URL paths that represent specific REST API resources (Pods, Services, Deployments and so on).
RBAC authorization is configured with two kinds of resources:
Roles and ClusterRoles - these specify which actions/verbs can be performed
RoleBindings and ClusterRoleBindings - these bind the above roles to a user, group or service account.
As you might already have figured out, a ClusterRole is the one you are looking for. This allows you to restrict a specific user or group across the cluster.
In the example below we are creating a ClusterRole that can only list pods. The namespace is omitted since ClusterRoles are not namespaced.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
This permission then has to be bound via a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to list pods in any namespace.
kind: ClusterRoleBinding
metadata:
  name: list-pods-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
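A quick way to sanity-check such a binding is kubectl's impersonation support (the user and group names below are placeholders, and impersonating requires sufficient rights yourself):

kubectl auth can-i list pods --as=jane --as-group=manager --all-namespaces   # expected: yes
kubectl auth can-i delete pods --as=jane --as-group=manager                  # expected: no (assuming no other bindings)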
Because you don't have enough permissions on your own, you have to reach out to the appropriate person who manages them to create a user for you with the ClusterRole view. The view role should already be predefined in the cluster (kubectl get clusterrole view).
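If the cluster admin agrees, one way to grant that (the binding name is a placeholder, the user digital comes from the question) is:

kubectl create clusterrolebinding digital-view --clusterrole=view --user=digital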
If you wish to read more, the Kubernetes docs explain the whole concept of authorization well.

How do I relate GCP users to GKE Kubernetes users, for authentication and subsequent authorization?

I am using GKE Kubernetes in GCP. I am new to GCP, GKE, and kubectl. I am trying to create new Kubernetes users in order to assign them ClusterRoleBindings, and then login (kubectl) as those users.
I do not see the relationship between GCP users and Kubernetes "users" (I do understand there's no User object type in Kubernetes).
According to https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview , Kubernetes user accounts are Google Accounts.
Accordingly, I created some Google accounts and then associated them with my GCP account via IAM. I can see these accounts fine in IAM.
Then I performed gcloud auth login on those new users, and I could see them in gcloud auth list. I then tried accessing gcloud resources (gcloud compute disks list) as my various users. This worked as expected - the GCP user permissions were respected.
I then created a Kubernetes ClusterRole. The next step was to bind those users to that role, with a ClusterRoleBinding.
ClusterRole.yaml (creates fine):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: cluster-role-pod-reader-1
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
ClusterRoleBinding.yaml (creates fine):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-role-binding-pod-reader-1
subjects:
- kind: User
  name: MYTESTUSER@gmail.com # not real userid
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-role-pod-reader-1
  apiGroup: rbac.authorization.k8s.io
In Kubernetes, I could create bindings, but my first problem is that I could create a ClusterRoleBinding between an existing ClusterRole and a non-existent user. I would have thought that would fail. It means I'm missing something important.
My second problem is I do not know how to login to kubectl as one of the new users.
Overall I'm missing the connection between GCP/IAM users and GKE users. Help would be much appreciated!
Kubernetes doesn't have a user database. Users live outside the cluster and are usually controlled by the cloud provider.
If you're using GKE, the users are controlled by the GCP IAM. Therefore you can't list users with kubectl.
You can create service accounts though. However, it is important to understand the difference between service accounts and users. Users are for real people while service accounts are for processes inside and outside kubernetes.
When you create a ClusterRoleBinding this means to kubernetes:
If a user with the username MYTESTUSER@gmail.com enters the cluster, bind him to the ClusterRole cluster-role-pod-reader-1
To use kubernetes with the GCP IAM users, you have to do the following:
add the user to IAM
add him to the role roles/container.viewer
create the RoleBinding/ClusterRoleBinding of your choice
You can list the respective IAM roles (not to be mistaken with RBAC roles) with this command:
gcloud iam roles list | grep 'roles/container\.' -B2 -A2
With the principle of least privilege in mind, you should grant your user only the minimal rights to log in to the cluster. The other IAM roles (except for roles/container.clusterAdmin) will automatically grant access with higher privileges to objects inside all clusters of your project.
RBAC only allows privileges to be added, therefore you should choose the IAM role with the least privileges and add the required privileges via RBAC on top.
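Putting the steps together, a rough sketch could look like this (the project, cluster and zone names are placeholders; the email is the one from the question):

# grant the minimal IAM role needed to reach the cluster
gcloud projects add-iam-policy-binding my-project \
    --member=user:MYTESTUSER@gmail.com --role=roles/container.viewer

# the user then fetches credentials; kubectl will use their Google identity
gcloud container clusters get-credentials my-cluster --zone=us-central1-a --project=my-project

# finally, apply the ClusterRoleBinding shown above for the fine-grained RBAC rights
kubectl apply -f ClusterRoleBinding.yaml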

Limit access to a Kubernetes cluster on Google Cloud Platform

We have created 2 different Kubernetes clusters on Google Cloud Platform, one for Development and the other for Production.
Our team members have the "editor" role (so they can create, update, delete and list pods).
We want to limit access to the production cluster by using the RBAC authorization provided by Kubernetes. I've created a ClusterRole and a ClusterRoleBinding, as follows:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: prod-all
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: access-prod-all
subjects:
- kind: User
  name: xxx@xxx.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: prod-all
  apiGroup: rbac.authorization.k8s.io
But the users already have an "editor" role (complete access to all the clusters). So I don't know if we should assign a simple "viewer" role and then extend it using Kubernetes RBAC.
I also want to know if there is a way to completely hide the production cluster from some users. (our clusters are in the same project)
If you are in an initial phase, or you can manage to move your testing cluster, I would advise you to set up the clusters in two different projects.
This creates two completely separate environments and will spare you issues in the future: you automatically forbid access to half of your resources, and you don't have to fear that something is misconfigured and your production is still reachable. When you need to grant something, you simply add that person to the project with the corresponding role.
You might succeed in blocking cluster access using IAM and RBAC, but then you would still need to secure access to the networking components, load balancers, firewalls, Compute Engine, etc.
It may be a lot of work at the beginning, but in the long run it will save you a lot of issues.
This is the link to the official Google Cloud documentation about how to set up two clusters, one of which is in production.

Only enable ServiceAccounts for some pods in Kubernetes

I use the Kubernetes ServiceAccount plugin to automatically inject a ca.crt and token into my pods. This is useful for applications such as kube2sky which need to access the API server.
However, I run many hundreds of other pods that don't need this token. Is there a way to stop the ServiceAccount plugin from injecting the default-token into these pods (or, even better, have it off by default and turn it on explicitly for a pod)?
As of Kubernetes 1.6+, you can disable automounting API credentials for a particular pod, as stated in the Kubernetes service accounts documentation:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
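The same flag can also be set on the ServiceAccount object itself, so that every pod using it opts out by default (a pod spec can still override it). A minimal sketch:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false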
Right now there isn't a way to enable a service account for some pods but not others, although you can use ABAC with some service accounts to restrict their access to the apiserver.
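For completeness, an ABAC policy file is a set of one-JSON-object-per-line entries passed to the API server via --authorization-policy-file (with --authorization-mode=ABAC). A hedged sketch restricting a hypothetical kube2sky service account to read-only access might look like:

{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "system:serviceaccount:kube-system:kube2sky", "namespace": "*", "resource": "*", "apiGroup": "*", "readonly": true}}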
This issue is being discussed in https://github.com/kubernetes/kubernetes/issues/16779 and I'd encourage you to add your use case to that issue and see when it will be implemented.