AWS assume role using an AWS EKS IRSA role - kubernetes

I am trying to assume a role from an EKS container that has an IRSA role attached to it. However, when I assume the role, I can see that the container is using the EC2 instance IAM role instead of the IRSA role.
[I] ✦2 ➜ kubectl get sa web -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxx:role/web
Assuming the role from inside the container:
CREDENTIALS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "$ROLE_SESSION_NAME")
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::xxxxxxxxxx:assumed-role/eks-instance/role/i-02xxxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::yyyyyyyyyy:role/app
aws sts get-caller-identity returns the EC2 instance IAM role.
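One thing worth checking is whether the IRSA web identity variables are being injected into the container at all; a quick debugging sketch from inside the pod:
# These are injected by the EKS pod identity webhook when IRSA is wired up
env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'

# If they are present, the IRSA role can be assumed explicitly with the token
aws sts assume-role-with-web-identity \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name irsa-debug \
  --web-identity-token "$(cat "$AWS_WEB_IDENTITY_TOKEN_FILE")"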

Related

Give cluster admin access to EKS worker nodes

We have an EKS cluster running version 1.21. We want to give admin access to the worker nodes, so we modified the aws-auth ConfigMap and added "system:masters" for the EKS worker node role. Below is the snippet of the modified ConfigMap.
data:
  mapRoles: |
    - groups:
        - system:nodes
        - system:bootstrappers
        - system:masters
      rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
After adding this section, the EKS worker nodes did get admin access to the cluster, but in the EKS console the node groups are in a degraded state. The Health issues section shows the error below, and we are not able to update the cluster because of it. Please help.
Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.
The error message indicates that the instance role (terraform-eks-worker-node-role) lacks the AWS managed policy AmazonEKSWorkerNodePolicy. Here's a troubleshooting guide for reference.
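If that policy is indeed missing, it can be attached to the instance role with the AWS CLI; a minimal sketch using the role name from the question:
aws iam attach-role-policy \
  --role-name terraform-eks-worker-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy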
To provide cluster admin access to your agent pod, bind the cluster-admin ClusterRole to the agent pod's service account:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <of your own>
  namespace: <where your agent runs>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: <used by your agent pod>
  namespace: <where your agent runs>
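To confirm the binding took effect, you can run an impersonation check against the API server; a small sketch, with <agent-namespace> and <agent-serviceaccount> as placeholders:
# Should print "yes" once the RoleBinding above is applied
kubectl auth can-i create deployments \
  --as=system:serviceaccount:<agent-namespace>:<agent-serviceaccount> \
  -n <agent-namespace>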
During an issue such as this one, a quick way to get more details is the "Health issues" section on the EKS service page. In this case it reported the same error in the description: an access permissions issue with the specific role eks-quickstart-test-ManagedNodeInstance.
The aforementioned role lacks permissions on the cluster, and this can be fixed in the aws-auth configuration as described below:
Run the following command from the role/user which created the EKS cluster:
kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml
Add the role along with the required permissions such as system:masters in the mapRoles: section as shown below:
mapRoles: |
  - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
      - system:masters
Apply the updates to the cluster with the command:
kubectl apply -f aws-auth.yaml
This should resolve the permission issues and your cluster nodes should be visible as healthy and ready for pods to be scheduled.
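After applying the ConfigMap, the node group health can be re-checked; a quick sketch (cluster and node group names are placeholders):
kubectl get nodes

aws eks describe-nodegroup \
  --cluster-name <cluster-name> \
  --nodegroup-name <nodegroup-name> \
  --query nodegroup.health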

Can't deploy bitnami/rabbitmq Helm Chart on GKE, permission to create role is required

Introduction:
I am trying to deploy a RabbitMQ Helm chart to GKE with my GitLab CI/CD pipeline. The command I use to install my chart is:
helm upgrade --install rabbitmq --create-namespace --namespace kubi-app-main -f envs/main/rabbitmq/rabbitmq.yaml bitnami/rabbitmq
envs/rabbitmq/rabbitmq.yaml:
auth:
  username: user
  password: password
# The used vhost is default-vhost
extraConfiguration: |-
  default_vhost = default-vhost
  default_permissions.configure = .*
  default_permissions.read = .*
  default_permissions.write = .*
The GitLab job first connects to the GKE cluster with gcloud:
echo "$SERVICE_ACCOUNT_KEY" > key.json
gcloud auth activate-service-account --key-file=key.json
gcloud config set project project-kubi-app
gcloud container clusters get-credentials cluster-1 --zone europe-west9-a --project project-kubi-app
The Issue:
But the helm upgrade fails:
Error: roles.rbac.authorization.k8s.io is forbidden: User "kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com" cannot create resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "kubi-app-main": requires one of ["container.roles.create"] permission(s).
Checking the roles of the user (service account) on the project:
gcloud projects get-iam-policy project-kubi-app --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com"
This returns ROLE roles/editor, meaning that my service account has the Editor role on the project.
From what I understand, the service account kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com has the Editor role on the project project-kubi-app.
BUT the service account that I am using can't create a role in the namespace kubi-app-main.
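This can be confirmed with a self-check run under the same credentials the job uses; a small diagnostic sketch:
# Prints "no" as long as the service account only has roles/editor
kubectl auth can-i create roles.rbac.authorization.k8s.io --namespace kubi-app-main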
I don't understand the use of this role, but its origin is the RabbitMQ Helm chart.
From the RabbitMQ Helm chart:
...
# Source: rabbitmq/templates/rolebinding.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rabbitmq-endpoint-reader
  namespace: "kubi-app-main"
  labels:
    app.kubernetes.io/name: rabbitmq
    helm.sh/chart: rabbitmq-10.1.8
    app.kubernetes.io/instance: rabbitmq
    app.kubernetes.io/managed-by: Helm
subjects:
  - kind: ServiceAccount
    name: rabbitmq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rabbitmq-endpoint-reader
...
---
EDIT:
I have changed my service account role to Owner and now it works, but I would like to know the role required to create other roles.
roles/editor allows you to create/update/delete resources for most services, but it does not include the permission to perform any of those operations on roles in general. roles/owner, on the other hand, does, as it essentially makes you an admin of (almost) every resource.
For GKE, the usual role required to create/modify/update roles within the cluster is roles/container.clusterAdmin. Check out GKE roles.
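If Owner is broader than you want, the narrower grant can be applied at the project level; a sketch with gcloud, using the project and service account names from the question:
gcloud projects add-iam-policy-binding project-kubi-app \
  --member="serviceAccount:kubiapp-cluster-sa@project-kubi-app.iam.gserviceaccount.com" \
  --role="roles/container.clusterAdmin"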

kubernetes-external-secrets on GKE, permission error

I installed kubernetes-external-secrets with Helm on GKE.
GKE: 1.16.15-gke.6000 on asia-northeast1
helm app version 6.2.0
using Workload Identity as the documentation describes
For Workload Identity, the service account I bind with the command below (my-secrets-sa@$PROJECT.iam.gserviceaccount.com) has the Secret Manager Admin role, which seems necessary for using Google Secret Manager:
gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:$CLUSTER_PROJECT.svc.id.goog[$SECRETS_NAMESPACE/kubernetes-external-secrets]" my-secrets-sa@$PROJECT.iam.gserviceaccount.com
Workload Identity looks set up correctly, because checking the service account from a pod on GKE shows the correct service account:
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_workload_identity_on_a_new_cluster
Creating a pod in the cluster and checking auth inside it shows my-secrets-sa@$PROJECT.iam.gserviceaccount.com:
$ kubectl run -it --image google/cloud-sdk:slim --serviceaccount ksa-name --namespace k8s-namespace workload-identity-test
$ gcloud auth list
But even so, after creating an ExternalSecret, it shows an error:
ERROR, 7 PERMISSION_DENIED: Permission 'secretmanager.versions.access' denied for resource 'projects/project-id/secrets/my-gsm-secret-name/versions/latest' (or it may not exist).
The secret my-gsm-secret-name itself exists in Secret Manager, so it should not be "not exist".
Also, the permissions should be correctly set by Workload Identity.
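Access to the secret itself can also be double-checked from the workload-identity test pod above; a small sketch, assuming the secret lives in the project my-gsm-secret-project:
# Prints the secret payload only if the bound Google service account really has access
$ gcloud secrets versions access latest \
    --secret my-gsm-secret-name \
    --project my-gsm-secret-project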
This is the ExternalSecret I defined:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
    - key: my-gsm-secret-name # name of the GCP secret
      name: my-kubernetes-secret-name # key name in the k8s secret
      version: latest # version of the GCP secret
      property: value # name of the field in the GCP secret
Has anyone had a similar problem before?
Thank you
The whole command sequence:
Create a cluster with a workload pool:
$ gcloud container clusters create cluster --region asia-northeast1 --node-locations asia-northeast1-a --num-nodes 1 --preemptible --workload-pool=my-project.svc.id.goog
Create a Kubernetes service account:
$ kubectl create serviceaccount --namespace default ksa
Bind the Kubernetes service account to the Google service account:
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/ksa]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com
Add the annotation:
$ kubectl annotate serviceaccount \
    --namespace default \
    ksa \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com
Install with Helm:
$ helm install my-release external-secrets/kubernetes-external-secrets
Create the ExternalSecret:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
    - key: my-gsm-secret-name # name of the GCP secret
      name: my-kubernetes-secret-name # key name in the k8s secret
      version: latest # version of the GCP secret
      property: value # name of the field in the GCP secret
$ kubectl apply -f external-secret.yaml
I noticed that I had been using a different Kubernetes service account.
When installing the Helm chart, a new Kubernetes service account my-release-kubernetes-external-secrets was created, and the service/pods run under this service account.
So I should have bound my-release-kubernetes-external-secrets to the Google service account instead.
Now it works well.
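For reference, the corrected binding and annotation look like this; a sketch assuming the release name my-release and the default namespace:
$ gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/my-release-kubernetes-external-secrets]" \
    my-secrets-sa@my-project.iam.gserviceaccount.com

$ kubectl annotate serviceaccount \
    --namespace default \
    my-release-kubernetes-external-secrets \
    iam.gke.io/gcp-service-account=my-secrets-sa@my-project.iam.gserviceaccount.com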
Thank you @matt_j @norbjd

Kubernetes read-only context

I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read-only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use Cloud Shell for this to fully distinguish the two).
I haven't found much documentation about this; it seems I can set up roles based on my email but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster.
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
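Before switching contexts, you can sanity-check the binding from your admin context via impersonation; a quick sketch (assuming the service account has no extra Cloud IAM grants on the cluster):
$ kubectl auth can-i list pods --as=<service-account-email>
yes
$ kubectl auth can-i create namespaces --as=<service-account-email>
no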
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created

Anonymous access to Kibana Dashboard (K8s Cluster)

I deployed an HA K8s cluster with 3 master nodes and 2 worker nodes. I access my K8s Dashboard through the kubectl client (local) and kubectl proxy. My K8s Dashboard is accessed through tokens by some RBAC users, who have limited access to namespaces, and by cluster admin users. I want to give anonymous access to all my users for viewing the deployment logs, i.e., to the Kibana Dashboard (add-on). Can anyone help me with this?
Below, I specified the required artifacts that are running on my cluster with their versions:
K8s version: 1.8.0
kibana: 5.6.4
elasticsearch-logging: 5.6.4
You can try creating a ClusterRoleBinding for some specific users. In my case, I am using LDAP authentication for accessing the Kubernetes API. I have assigned admin privileges to some users and read-only access to some other users. Refer to the ClusterRoleBinding YAML below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-readonly-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:aggregate-to-view
subjects:
- kind: User
  name: https://dex.domain.com/dex#user1@domain.com
I am using the dex tool for LDAP authentication. You can try giving the RBAC username directly.
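If the goal is read-only access for every authenticated user rather than naming each one, a Group subject can be bound to the built-in view ClusterRole instead; a sketch, not tied to any particular authentication setup:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: authenticated-read-only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io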