Limit access to a Kubernetes cluster on Google Cloud Platform - kubernetes

We have created 2 different Kubernetes clusters on Google Cloud Platform, one for Development and the other for Production.
Our team members have the "editor" role (so they can create, update, delete, and list pods).
We want to limit access to the production cluster using the RBAC authorization provided by Kubernetes. I've created a ClusterRole and a ClusterRoleBinding, as follows:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: prod-all
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: access-prod-all
subjects:
- kind: User
  name: xxx#xxx.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: prod-all
  apiGroup: rbac.authorization.k8s.io
But the users already have the "editor" role (complete access to all the clusters), so I don't know whether we should instead assign a simpler "viewer" role and then extend it using Kubernetes RBAC.
I also want to know if there is a way to completely hide the production cluster from some users. (our clusters are in the same project)

If you are in an initial phase, or if you can manage to move your testing cluster, I would advise you to set up the clusters in two different projects.
This creates two completely separate environments, so you will not have this kind of issue in the future: you automatically forbid access to half of your resources, and you don't have to fear that a misconfiguration leaves your production cluster reachable. When you need to grant something, you simply add that person to the project with the corresponding role.
Even if you succeed in blocking cluster access using IAM and RBAC, you would still need to deal with securing access to the networking components, load balancers, firewalls, Compute Engine, etc.
Maybe at the beginning it is a lot of work, but in the long run it will save you a lot of issues.
This is the link to the official Google Cloud documentation about how to set up two clusters, of which one is in production.
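As a rough sketch of the per-project approach (the project IDs, zone, and e-mail below are placeholders, not values from the question), cluster creation and access grants then become ordinary gcloud operations scoped to each project:
# Create each cluster in its own project (placeholder project IDs and zone).
gcloud container clusters create dev-cluster --project=my-dev-project --zone=europe-west1-b
gcloud container clusters create prod-cluster --project=my-prod-project --zone=europe-west1-b
# Grant a team member access to the development project only;
# roles/container.developer gives Kubernetes API access without cluster-admin operations.
gcloud projects add-iam-policy-binding my-dev-project \
  --member=user:dev@example.com \
  --role=roles/container.developer
Anyone who is never added to the production project simply never sees that cluster.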

Related

How can I access Microk8s in Read only mode?

I would like to read the state of K8s using µK8s, but I don't want to have the rights to modify anything. How do I achieve this?
The following will give me full access:
microk8s.kubectl
Insufficient permissions to access MicroK8s. You can either try again with sudo or add the user digital to the 'microk8s' group:
sudo usermod -a -G microk8s digital
sudo chown -f -R digital ~/.kube
The new group will be available on the user's next login.
On Unix/Linux we can just set appropriate file/directory access permissions (just rx), decrease shell limits (like max memory/open file descriptors), and decrease process priority (nice -19). We are looking for a similar solution for K8s.
This kind of problem in Kubernetes is handled via RBAC (Role-Based Access Control). RBAC prevents unauthorized users from viewing or modifying the cluster state. Because the API server exposes a REST interface, users perform actions by sending HTTP requests to the server. Users authenticate themselves by including credentials in the request (an authentication token, username and password, or a client certificate).
As with any REST client you have GET, POST, PUT, DELETE, etc. These are sent to specific URL paths that represent specific REST API resources (Pods, Services, Deployments, and so on).
RBAC authorization is configured with two kinds of resources:
Roles and ClusterRoles - these specify which actions/verbs can be performed
RoleBindings and ClusterRoleBindings - these bind the above roles to a user, group, or service account.
As you might have already found out, a ClusterRole is the one you are looking for. It allows you to restrict a specific user or group across the whole cluster.
In the example below we create a ClusterRole that can only list pods. The namespace is omitted since ClusterRoles are not namespaced.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
This permission then has to be bound via a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to list pods in any namespace.
kind: ClusterRoleBinding
metadata:
  name: list-pods-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
Because you don't have enough permissions on your own, you have to reach out to the appropriate person who manages them and ask them to create a user for you that has the ClusterRole view. The view role should already be predefined in the cluster (kubectl get clusterrole view).
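For reference, the binding itself is a one-liner; whoever administers the cluster could grant the built-in view ClusterRole to your user (the user name digital is taken from the question) with something like:
kubectl create clusterrolebinding digital-view --clusterrole=view --user=digital
The view role gives read-only access to most namespaced resources, which matches the "just rx" behaviour you are after.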
If you wish to read more, the Kubernetes docs explain the whole concept of authorization well.

How to secure kubernetes secrets?

I am trying to prevent Kubernetes secrets from being viewable by any user.
I tried Sealed Secrets, but that only hides secrets so they can be stored in version control.
As soon as I apply that secret, I can see it using the command below:
kubectl get secret mysecret -o yaml
The above command still shows the base64-encoded form of the secret.
How do I prevent someone from seeing the secret (even in base64 form) with the above simple command?
You can use HashiCorp Vault or kubernetes-external-secrets (https://github.com/godaddy/kubernetes-external-secrets).
Or, if you just want to restrict access, you should create a read-only user and restrict that user's access to secrets using a Role and RoleBinding.
Then, if that user tries to get or describe a secret, they will get an access denied error.
Sample code:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-secrets
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ""
  resources:
  - pods/exec
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-secrets
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-secrets
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: demo
The above role grants no access to secrets, so the demo user gets an access denied error.
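If you have impersonation rights on the cluster, you can verify the effect without switching users by asking the API server what the demo user is allowed to do:
# Expected: "no" for secrets, "yes" for pods, given the Role above.
kubectl auth can-i get secrets --as=demo -n default
kubectl auth can-i list pods --as=demo -n default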
There is no way to accomplish this with Kubernetes internal tools. You will always have to rely on a third-party tool.
HashiCorp Vault is a very commonly used solution; it is very powerful and supports some very nice features, like dynamic secrets or envelope encryption. But it can also get very complex to configure, so you need to decide for yourself what kind of solution you need.
I would recommend using Sealed Secrets. It encrypts your secrets so that you can push the encrypted secrets safely to your repository. It does not have such a big feature list, but it does exactly what you described.
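As a rough sketch of the Sealed Secrets workflow (the file names are placeholders, and the kubeseal CLI plus its controller must already be installed in the cluster):
# Encrypt a regular Secret manifest into a SealedSecret that is safe to commit.
kubeseal --format yaml < mysecret.yaml > mysealedsecret.yaml
# The controller decrypts it into a normal Secret inside the cluster,
# so in-cluster read access still has to be limited with RBAC as shown above.
kubectl apply -f mysealedsecret.yaml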
You can inject HashiCorp Vault secrets into Kubernetes pods via an init container and keep them up to date with a sidecar container.
More details here https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/
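A minimal sketch of what the injection looks like, assuming the Vault Agent injector is installed and that a Vault Kubernetes-auth role named myapp and a secret at secret/data/myapp/config already exist (both are assumptions, not values from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Ask the injector to add the Vault init and sidecar containers to the pod.
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp"
        # Render this Vault secret into the pod (by default under /vault/secrets/config).
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"
    spec:
      containers:
      - name: myapp
        image: myapp:latest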

How to log in to an Azure Kubernetes cluster?

How can we log in to an AKS cluster we created, by using a service account?
We are asked to execute kubectl create clusterrolebinding add-on-cluster-admin ......... but we are not sure how to use this and log in to the created cluster in Azure.
you can use this quick start tutorial: https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster
basically you need to install kubectl:
az aks install-cli
and pull credentials for AKS:
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
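If you specifically want to log in with a service account rather than your own Azure credentials, a rough sketch (the account name is a placeholder) is to create the service account, bind it with the clusterrolebinding command you were given, and use its token as the credential in a kubeconfig entry:
# Create the service account and bind it to cluster-admin (use a narrower role if you can).
kubectl create serviceaccount deploy-bot -n kube-system
kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:deploy-bot
# On Kubernetes 1.24 or newer, request a short-lived token for it;
# that token can then be used with kubectl --token or in a kubeconfig user entry.
kubectl create token deploy-bot -n kube-system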
As per the documentation:
User accounts vs service accounts
Kubernetes distinguishes between the concept of a user account and a service account for a number of reasons:
User accounts are for humans. Service accounts are for processes, which run in pods.
User accounts are intended to be global. Names must be unique across all namespaces of a cluster, future user resource will not be namespaced. Service accounts are namespaced.
Typically, a cluster’s User accounts might be synced from a corporate database, where new user account creation requires special privileges and is tied to complex business processes. Service account creation is intended to be more lightweight, allowing cluster users to create service accounts for specific tasks (i.e. principle of least privilege).
Auditing considerations for humans and service accounts may differ.
A config bundle for a complex system may include definition of various service accounts for components of that system. Because service accounts can be created ad-hoc and have namespaced names, such config is portable.
As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: User
  name: manager
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
You can find other helpful information in the official Kubernetes documentation and in the Azure Kubernetes Service (AKS) docs.

Spring Cloud Kubernetes: What are cluster-reader permissions?

According to the Spring Cloud Kubernetes docs, in order to discover services/pods in RBAC-enabled Kubernetes distros:
you need to make sure a pod that runs with spring-cloud-kubernetes has access to the Kubernetes API. For any service accounts you assign to a deployment/pod, you need to make sure it has the correct roles. For example, you can add cluster-reader permissions to your default service account depending on the project you’re in.
What are cluster-reader permissions in order to discover services/pods?
The error I am receiving is:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://x.x.x.x/api/v1/namespaces/jx-staging/services.
Message: Forbidden!Configured service account doesn't have access.
Service account may have been revoked. services is forbidden:
User "system:serviceaccount:jx-staging:default" cannot list services in the namespace "jx-staging"
Read access to endpoints and services seems to be the bare minimum for Spring Cloud Kubernetes to discover pods and services.
The example below adds these permissions to the default service account in the default namespace.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-read-role
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - pods
  - services
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-read-rolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-read-role
  apiGroup: rbac.authorization.k8s.io
Kubernetes generally categorizes roles into two types:
Role: These are specific to the namespace in which they are granted
ClusterRole: Applies to the whole cluster, meaning that it applies to all namespaces
So what the Spring Cloud Kubernetes docs mean is that, in order to properly discover services/pods across all namespaces, the ServiceAccount associated with the application should have a ClusterRole that allows it to read Pods, Services, etc.
This part of the Kubernetes docs (which also contains great examples) is a must-read for a general understanding of Kubernetes RBAC.
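To confirm whether the binding took effect for the service account from the error message, you can impersonate it (this assumes you are allowed to impersonate service accounts):
kubectl auth can-i list services \
  --as=system:serviceaccount:jx-staging:default -n jx-staging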

How to set an IAM user to have specific rights in a Kubernetes cluster on AWS?

I want to allow a user to do things in the Kubernetes cluster on EKS, for example: apply deployments, create secrets, create volumes, etc.
I'm not sure which role to use for that. I don't want to allow users to create, delete, or list clusters - only to perform Kubernetes operations within the cluster.
As far as I know, the permissions to the cluster are handled with the Heptio authenticator. I believe I am missing something here but can't figure out what.
This link is the right one to add an AWS IAM user or AWS Role to a given K8S Role.
Let's say that you want to create a new K8s Role with read-only permission on pods, called pod-reader:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
After creating the role, you need to give your IAM user permission to assume it. This is easily done via the aws-auth ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::270870090353:user/franziska_adler
      username: iam_user_name
      groups:
        - pod-reader
More information about K8S RBAC Authorization here
Looks like you have to manually add the users in the ConfigMap under the 'mapUsers' item and then run kubectl apply -f config-map.yml, according to the AWS documentation, section 3, "Add your IAM users, roles, or AWS accounts to the configMap."
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html