I installed JupyterHub with Helm on an EKS cluster. Although the EKS service role can be correctly assumed by the hub pod (whose name starts with "hub-"), the user pods (whose names start with "jupyter-USERNAME") seem unable to assume the role. Because of this, when a user uses boto3 in her notebook, she is asked for her IAM user credentials, which is not ideal.
All other pods in that namespace can assume the EKS role automatically except for the JupyterHub user pods. May I have your advice on this please? Thanks everyone for your time and consideration.
You probably need to configure the service_account for your KubeSpawner.
https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html
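As a sketch: if you deployed with the Zero to JupyterHub Helm chart, you can point the spawned user pods at a pre-created service account via the chart's singleuser.serviceAccountName value (the service account name here, jupyter-user-sa, is a placeholder; it must exist in the hub's namespace and carry whatever IAM annotation your setup uses, e.g. the eks.amazonaws.com/role-arn annotation for IAM Roles for Service Accounts):

```yaml
# values.yaml fragment for the JupyterHub Helm chart (assumed: z2jh chart)
singleuser:
  # Service account the spawned jupyter-USERNAME pods will run as.
  # "jupyter-user-sa" is a placeholder name; create it in the hub's namespace.
  serviceAccountName: jupyter-user-sa
```

Equivalently, if you configure KubeSpawner directly, setting c.KubeSpawner.service_account = "jupyter-user-sa" in jupyterhub_config.py should have the same effect.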
Related
I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role), which only has read-only access.
Is there any way to tell the config to use the admin role instead of the view role?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for the IAM entity that created an EKS cluster, because by default it is automatically mapped to the "system:masters" K8s group. So, if you want to grant additional permissions in a K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is set in the aws-auth ConfigMap in the kube-system namespace.
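For illustration, a minimal aws-auth entry that maps an IAM role to an admin-level K8s group might look like this (the account ID, role ARN, and username here are placeholders; adjust to your environment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: anyone assuming this IAM role is authenticated as
    # the K8s user "eks-admin" and authorized via the system:masters group.
    - rolearn: arn:aws:iam::111122223333:role/eks-admins
      username: eks-admin
      groups:
        - system:masters
```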
Back to the question
I'm not sure why K8s may have mapped that IAM user to the least privileged K8s user - it may be the default behaviour (a bug?), or the mapping record (for view perms) came later in the ConfigMap and simply overwrote the previous mapping.
Anyway, there is no way to specify which K8s user to use with such a mapping.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per the docs, but I'm not sure whether that will work.
Some reading reference: #1, #2
I'm still new to Kubernetes and trying to gain some expertise. I have a Cluster On-Prem and have been trying to setup CI/CD using ArgoCD. When I deploy the application, I get the below error message. Any ideas what this could be?
deployments.apps "account-deployment" is forbidden: user "system:serviceaccount:argocd:argocd-application-controller" is not an admin and does not have permissions to use extra kernel capabilities for resource account-deployment
The argocd service account (which is the set of permissions a pod is granted over the cluster APIs) is lacking permissions cluster-wide. Take a look at Roles, ClusterRoles, and RoleBindings, which are the way to bind permissions to a user/service account:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
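As a sketch, a Role plus RoleBinding granting the Argo CD application controller the ability to manage Deployments in the target namespace could look like this (the namespace my-app-namespace and the Role name are assumptions; widen the rules to match whatever resources your app deploys):

```yaml
# Grant the argocd-application-controller SA permission to manage
# Deployments in the application's namespace (names are placeholders).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-deployer
  namespace: my-app-namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-deployer
  namespace: my-app-namespace
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: argocd
roleRef:
  kind: Role
  name: argocd-deployer
  apiGroup: rbac.authorization.k8s.io
```

If Argo CD needs to deploy into many namespaces, a ClusterRole with a ClusterRoleBinding (set up by someone with cluster-admin) is the usual alternative.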
We are using a K8s cluster but we don't have cluster-level permissions, so we can only create Role and ServiceAccount objects in our namespaces, and we need to install a service mesh solution (Istio or Linkerd) only in our namespaces.
Our operation team will agree to apply CRDs on the cluster for us, so that part is taken care of, but we can’t request for cluster admin permissions to set up the service mesh solutions.
We think this should be possible if we change all the ClusterRoles and ClusterRoleBindings in the Helm charts to Roles and RoleBindings.
So, the question is: how can we set up a service mesh using Istio or Linkerd without having admin permission on the K8s cluster?
Linkerd cannot function without certain ClusterRoles, ClusterRoleBindings, etc. However, it does provide a two-stage install mode where one phase corresponds to "cluster admin permissions needed" (aka give this to your ops team) and the other "cluster admin permissions NOT needed" (do this part yourself).
The set of cluster admin permissions needed is scoped down to be as small as possible, and can be inspected (the linkerd install config command simply outputs it to stdout).
See https://linkerd.io/2/tasks/install/#multi-stage-install for details.
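Assuming a Linkerd 2.x CLI, the two stages look roughly like this (hedged sketch; in newer Linkerd releases the staging commands have changed, so check the docs for your version):

```shell
# Stage 1 - run by the ops team with cluster-admin:
# applies only the cluster-scoped resources (CRDs, ClusterRoles, etc.)
linkerd install config | kubectl apply -f -

# Stage 2 - run with namespace-level permissions:
# installs the rest of the control plane
linkerd install control-plane | kubectl apply -f -
```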
For context, we originally tried to have a mode that required no cluster-level privileges, but it became clear we were going against the grain with how K8s operates, and we ended up abandoning that approach in favor of making the control plane cluster-wide but multi-tenant.
Our infrastructure currently has two Kubernetes clusters, with one cluster (cluster-1) creating pods in the other cluster (cluster-2). Since we are on Kubernetes 1.7.x, we are able to make this work.
However, with 1.8, Kubernetes added support for RBAC, as a result of which we cannot create pods in the new cluster anymore.
We already added support for service accounts and made sure that RoleBindings are properly set up. But the main issue is that the service account is not propagated outside of the cluster (and rightly so). The user cluster-2 sees the request coming from is called 'client', so when we added a RoleBinding with 'client' as a User, everything worked.
This is most certainly not the correct solution, because now any cluster that talks to the Kubernetes API server can create a pod.
Is there support for RBAC that works cross cluster? Or, is there a way to propagate the service info through to the cluster we want to create the pods in?
P.S.: Our Kubernetes clusters are currently on GKE, but we would like this to work on any Kubernetes engine.
Your cluster-1 SA uses a kubeconfig (for cluster-2) which resolves to the user "client". The only way to solve this is to generate a kubeconfig (for cluster-2) with an identity (cert/token) associated with your cluster-1 SA. There are lots of ways to do that: https://kubernetes.io/docs/admin/authentication/
The simplest way is to create an identical SA in cluster-2 and use its token in the kubeconfig in cluster-1. Grant RBAC permissions only to that SA.
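A rough sketch of that approach (names and namespace are placeholders; this assumes an older cluster where SA token secrets are auto-created - on recent Kubernetes you would use kubectl create token instead):

```shell
# In cluster-2: create the SA and read its auto-generated token
kubectl create serviceaccount cluster1-deployer -n default
SECRET=$(kubectl get serviceaccount cluster1-deployer -n default \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n default \
  -o jsonpath='{.data.token}' | base64 --decode)

# In the kubeconfig cluster-1 uses to reach cluster-2:
# authenticate as that SA instead of the shared "client" identity
kubectl config set-credentials cluster1-deployer --token="$TOKEN"
kubectl config set-context cluster-2 --user=cluster1-deployer
```

Then bind a Role/ClusterRole with pod-creation permissions to system:serviceaccount:default:cluster1-deployer only.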
I’m investigating this letsencrypt controller (https://github.com/tazjin/kubernetes-letsencrypt).
It requires that pods have permission to make changes to records in Cloud DNS. I thought that with the pods running on GKE I'd get that access via the default service account, but the requests are failing. What do I need to do to allow the pods access to Cloud DNS?
The Google Cloud DNS API's changes.create call requires either the https://www.googleapis.com/auth/ndev.clouddns.readwrite or https://www.googleapis.com/auth/cloud-platform scope, neither of which are enabled by default on a GKE cluster.
You can add a new Node Pool to your cluster with the DNS scope by running:
gcloud container node-pools create np1 --cluster my-cluster --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite
Or, you can create a brand new cluster with the scopes you need, either by passing the --scopes flag to gcloud container clusters create, or in the New Cluster dialog in Cloud Console: click "More" and set the necessary scopes to "Enabled".
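For example, the cluster-creation variant might look like this (cluster name and zone are placeholders):

```shell
# Create a new cluster whose nodes carry the Cloud DNS read/write scope
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite
```

Note that oauth scopes are fixed per node pool at creation time, which is why an existing pool can't simply be updated in place.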